[Binary artifact: tar archive of Zuul CI job output. Members: var/home/core/zuul-output/, var/home/core/zuul-output/logs/, var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log). The archive contents are compressed binary data and are not human-readable in this form.]
~WIY˴p4\TygL>QxZ;_aߺ8vQ牊kJ f218&K͗K-4 S~Wr4"ET➘ gDEA=t| (oB=H͌q kſ}vY g?; Fk&5hpn DL!L'47`hRh4A{{ԺI˸u*-bQ2 >̏&Xm̟q+݇>oyp5~4goW9nɸ@L،lwxGۛ3o̞ l5Iy1[S4oA4gnf-&tږ-V{&Si9~@Xfm\>K?-uA ˽%՚W:ptMgb駊~駎~'~9݀(|,x o;(Pocz )Be ʘIBuA&$SZD0KTdZͲm=P Sk4C8ry3ٞpC8m}Ⲳ{E3}yA@$XHbHZ%رJD ZHMۇ,\[\֣]8lkPވ_G-xm{a%t䊘df4tΙ"R꼶ϳ8^(Gͪ׌=Rʽhiߡ)<'.&$$ $sNY)=­y>nʐEUX()9ӞSE11.DXPq*P#4Z"@'mkxw/qtwL8~:jz `Pet{jבn-C%E 3^)“I ,ʜmQfa"Zh#}rRv,{"9Hc,ȸo5EpI{Ҭ£AJ]Y5rKZ;O ǣpUXzd/zWtg )XpٴʇOC pVڒsKJL1*;X=Gmx &4>FkR*oG{hEmJ 4jw\K=J&:g18o3yHF%`)Mh2kʖsx)T\f6얦MW#oAFC?͆sWz|d_׼WOt5tY?yxKgԵͳY5O䬞nޕoo| CKOn579.u۫ʹj\RD,p!긣TձRP])4&Є(|"l nOB}K_ $'+vt>'wȌkR/MJ\4&@(ZHT-W]㔱d@,(n)XdР IRG -i#oC0Bh(5z:dT`5/S>ԅ'h6x\?NqϏ+:[i m#Ȝ$R2\ɭ%:ߩtoL-%ezԙ]^?ə9KKv3YZhZ-Aտ~߂I@"Q KX@p$ Q3:`t>R`4(ldZpƥ 08Ц %k+knHol-V9*r6YjN1=xIHIDp"4=_|hB)NtQpD7ĤTb:,=.A1]Thә|T1igF8GHIs1 QQÙ3oy'6MQ" )'9VDYI\ )Dz/hE8vqh)n[VVA-m=3{ 43[gx*G[hkϳm˽>(7,IuPԳ `Vub:Dxo@^0d-(˯-<1X8hJ(OX4O4f7Gٙ{xBVRIzZ \-?WJ c_iΧHGV 7ESS3EFvx+G0y}oW  N[lBrTzqaKqg$ ycq7 Їrv]Ȧt"e}~y/ԑ1I=S)hrQG:%|YƉ3>dҳ&gKgU4_)_;ߗ7^ 0PEŒ߃qNϪzշk#x2|^-;w0ZbD4-!ZW#0ZƓ?A^Ldc qST$"kY*4aq]Nr7'O ~gs^ }G>RwA =Q;ˑT8i 21ycY#$]ոs cx>`u@Z9ATfc9"{INa_`&[Y[o;p4,pݴwe$vꮯPj%n^78$0)JdU;N WB 6K%8QB4O;g24|ns0@}d a!{TzhD <3)6bvϓqä .5qĞ <#j ䷠GlDBJy4Q QwVߚ%}}iТfwm u+Pvd7o!dKx ' .t Pgm@13̣J&РĈ m FD%,}0΍ލ߅rXM9"FF{ל<%Nɘ%RD\Hh54P) i8e]7T܀gGtZշǽ2PugQN&io~:9h¬8&2Wi%b{RĮXa@0.m׻X'Á73Vɍ&Ͱwkgm~m]ڨX趷?oVOvV/w"\=A852onv^zIŻ_&~Tڡ~4TIkJR5!XDʼvSEtգ5-崮+JiZcmcjV|i:Vr C^j5iLOrRӋ-_fM[Wٹ?*#3@?0;MfB{lUu|]t-fF۵5|Ot^EO+dMo^N6Zꔽ/RszwO/. 0%/~}rPߚ`xqA_χq&M2ez%t,ĨGtmNf>gIտ,}x0ykf{S6ӧU k]U@[HV'k&ӵK kE},O5S$9)`rlg55G?~ǟ:{mnD~(:z~2Ewxe/udfA?pyuRzQkXl2-~/J5a`8) Bup]z>apzf'dƓq"OYJVTCL.Z> s7EHhD:0 {Rm\?ו4|4.?VtГ*m4s]|ga|H2WV|7Gc`rZ\JRA:t}KQ1of5*t0VF@k`mpoo4>5Rdʊ 6Ya(X S Mq"Ӛ?n\+v(6.ԹYt}6˭ >HBr;/:.]U[*2~7kLAlQ**Ҵ\bnֺ$rYnU஫,ϊK͸p=!%U $h'Li!f\"cuI$$,DNԹ^ AHRVa@(LwZ ZFLc׶FS2mM7iȟv|82"SG ƈSBLU@_g%uA%z5ɓ K,^&j/I"CRt2`QNņO> ̂T)HJ\ˆښ8 5gLpH)fX'wReQ+|vQRH6`Y-WXA'JVvm[ʚXv|υ6U}Zɸ0yBE–2#]i:TzBq䚃c@Q(6FL("T:p=)Q[M ] jA&j3&vViŶ4㮾PuY¥D 7LsV7',ί f8|g175j-D\*N"eJdwhQ^I &걩%hQ "6N0+D{!CJGt`ZD"rOnMN~-ݚvV+r\HbY>1K" Ii-r! 0DȀ 1(QH|@N@Gc=HA^5q2_dY$Ǹ7+m=bӞGԝG ܋{XQTkяE?~R3~lwObP9RQ89񤂀1T`joPɛ N]'\ SO>oCBqm9z"g\Vrċ;/ BDYFyC*QVWRn}Udq_xMT(]HgЦŌZ/bYeO*H弎dxXU ?-j9ߙ]OFZ~(|Q; ^mx%|q-ȍdAd;IѨs$~#q |oU&W}W#aͮL%zJ*+$*/ mvY\e*i/^bSO,qE\P.J|Ћ(4Sby8VwYmXF('+OWp0](/8옰 )~b2a9i#J1T#$u;צVK1}= ݋7pz;^!2Hֿ1(_UOa?U`?f2g 2t<4]j8O S yӋ!EyF8#3!ϲ|S{ft&-|~pqs̿RY%wo\DJ<)ؐ bSs˲'KiM\ !9Mi+۶m 5A ΕFZJ/Ԋh(bM Mb7N,R9S=1ঋAz #K bOH0{ZU+ȎZrH%-9+ǓPxJ\!|R+G! Z^\Bq%2bĕTqɥ{'ԊWJ{q ŕjA$X(7 UDBj7V|S4WXPKXJћmdnDF[;imrJQzP|΋8_to9N2-<{kowǷsr5ClzY;EnP WI_ q6K`3eE^v[b]N}rc_&1mdwm:NA%m%Vr5lB5RX4Quy5p\o+codžw9w#<@/;g3= FuL`R@d"@Ă憀X *ZԆvrdQ~rW^JʻX 8*f.ߙNB)9,Ed~6F{b48j=ɴ]SqK;:#[kn(m:iWy#S0;'[ϻ\Y,XzCfGTfV55n^{!UO)DO\2MK;uG.(a7Z[Jg.^ŸNzm:\//l/<3 Q^Y>v*P"ERnʘIBluA&$SZD0KT%|W<-v!D{xSqiS\C˜K^_B(s/~k9eYԀn}l+qaҁDQV%WĔ&%Ԇ(RՠYw׊OZw{*ҽYSBDBI4E/pkr P!n))9ӞSE11.DXPq*P#4Z"@֦ە>;#g*g}y5K (GW w]MW7>zv^iV.>LtzƂDY !tyO{''\(Qfa"ZFF铓v 䴳FF `#VG#$%NHR U: W d<峽=_O_z)jv,_ȲԒA&x/3zoNZI'JfWt(pڽ#|q8d:ptyHJ[rnI217J;zG׮:uRI+4B@Rxc<^LF(jSgЊfS3YQD5A{Lq`.x#DB0*KlBY#dwTvl&wFΚNW |^UٞoGnnvْIEe+ν^!}&ijl]bjuY~{\Nmv լ:Ė%9]ۇNƏw+ka~ȾOA/ϣ88MtܸZ' ]u<ƚϛ>(jJw5Hr]/J4 yW-7m=QNg]i^\i"Tx-ݶlAeZx-ۍhiw䞵{u62\4) <ՠv(BEH1\ Fd w$`9OS98?Xe2#H 1)q9 Z(tN+r7z=Á ;7.2%3 5{KzC<G׉ߠ1* r2F "҃2%1/% ):=+D:τB)1Ł]i㹳jǘ2D*Bee CBׅ;#gM]dN|'E1b;J?9c#(Q<./*4&Є(|"zTLaM%qqsmz[^NՈvg&\D4/ QʊKqfZz虜춞}OHv(L섞i,9&{oDCFP֣d!H%kPb'[=$KMTٴAl:Q&5.pEphiE)A嗥YO-wHl\[̶ 9hoM#>zf#ޫ >kQUo*0Ku 髹2y\k%p1@5.VKO#Gjc5f)rY94uI6b3c/}ɮX U%jhûAg|zƝ.s*1hAV40U YAj |.pyJ7F: . 
JB(* 8:M`[b߈Bg5㬿|_avK.pՕbz?a>qF<8;yƏ[V T% w=_|q*0iL3'?@v+rxr=v:np:Cxyl%#W%oXOHt6Xj*Ch\'\ =]9fRI2W e14{ϟb,6qmϻ:֝ FΑ~?ߝէ':9}[q+0>G lHy+@mArCS8|hC3嬛_c\Nr˸wQq+cvWnu{_tQn0aOƗW9z_,oO̯@N.+H67{0-|EE՟oSŌY aZ{6_!=(_msNA| cYRE9俿$%Zv@Kj9˝yV! jGz%:~L;PEx7G[H~3m h#: V?Ө=ӨVMȮF\-Oz2Qi͉ sl~L/^Wl:zvea|UPr:2PJ1W{H|8jj-*3vr uf`#>r/SOJvØGGWLqǡ*:# :I1a q泡k04 䵻|WI/ ͆tA\{À`d\{0O!-7DhTH_ZJL)0">}_Kvh {EtSLE1(9N*ցKA1E4b$%\2N,s$n\l{NMv(C);e T톇,rtˮ4>y:窣ZgxD, r85Y', i2 8-`8›Exh ҃~ɚ^d*\ vv}+3+o3炁+(fs@r" dRKJyЄ9_Njoa =mhv}F8zdKioR`Rqt&B?sq`QKJF-m0 JBtz;Mui#ot߇czw rxa,KrX fɄFsRFYf)p$X`4Q%,DDꥦ0+Cʘtc6r6LYlY(Rqˊ|^8/ݜ-?}?Lw흽7Md>`ObBpaf*aOpr$NH9Bz5B*2p9LXk{[6Cb@: QXS5>|$`EL{/`&M^o:68n*wǛ^AwVhqs7gEpl0W$ aSH@Qwo-Jp8U hT!+q5G?'f`:l|Z
jdZPք{/'.2!m=|e:jOTH{t~?JM+igi9߿Y4yNu*¿ptvu)B5x{BMu9ѩO?Zv>rcNWfs28ctodqoѷ@͛ y+nqfZw/^!5\99M{69\Z2? o+xyL+H )`nêi˴J~ ,Ѱfqlp|T6<NX}A"z^ݱ̎a'ㄘ ;iAL*:on jŁ8 ַއ-7}N )<(.\?J f`N/oTZHNxt9;0x1)\Eqtz:ųg'c`JSARJRhP8tONR\ݟU+3>^gl6ؤzl8Lt,^M3]1C1`[eiE>bv:!jQW7dS~ۙp53qB(%ӀtX^nq?ɷ@oǑMiZb*9^/^iwh6ytC wۆL ȹԶZK,5A=aU]$?dnx">ּ7uHDamHeM5 ytwqc2OH9iMQC<^sdddh0άV(j'H>%_VҶHrR= yO@6rڃkvjN87D4 2n3R3C(ā KoƖX436Cd w^W0rr ;ʹ/9Jjpm9x?3J㚜A40:`Wr6{pnEΒEKE.X)'i$=8adN̈́hA%*z :ړ0R\dq $}:gLG eZ30{-#1![ iFΆ7] OaO#tk5#Nqʠ EpXRT<ԫAe苻N2$5Ic <* 9N|0a!R ˽)YP^Pû\Zʔp`f zz+al|mWfmO)jPΎ3s|9 xr^zY&v61W 0zdw򩕌kpC! [H;9Rf 4L'G9(jcdZSPL@H>jK p ^`AM:DPR5c6rk|J]sqƶPQ%Eildw67Wg4(h48V\ccj(j[8TDʜ% PRAjgM̪%= H ImR!`V0/2o.D.sgN~;vEk^k썩7ӑ|TjE%}c9E,F+0.&/^Bp!exюtErt\0ȁSeb^Nb71)ƝI[j6+6FԽF5 3EΘ R+`Z*ҿ>m4X`Xh9!w[B+ |: Ĥg$ѐ I16#1s:Y#f#gSE.:qɶzՋVz:Yi\@ܡȥe#jrA3+ΌEV/L3wlIf}ӇT 76(qU@FQWlח|ٳ/a,:@n4H/@9a@+5؛O<:/ ; >$Aw.IP Δ$;Avt 9ER, vP^.D J1j;7 #$ 0 TjˀQ!%puZ7F Y9l7yܾt Q2t0R۱~DZĻ6CMF-r IiE1(m&84K.v+QbLAL¬1jveZh8z|n19^5AvIR9,#5T""zQ΃ĊDM 1 u0`lW8. e.'8RYU4c,X"\B6xјI&r%,|[UeuPFhwd,j +, '%" -r*N 0a8WT!zKpuB;]V  Á_*H8/nno=nX!~hލaEj,\ޗ/I j:RjKf(Lw&zG=~5[34tk~2RFp~\klaP:޹_ͷ@yK;0'j;@,%C!fH jRz\TkXe;]۞t%*Yn3]7t:bwemIn/ʃ8Ks Ę"iGq̐6PI*9 4.dq (,}IEq0ȵ^T&&ȔDuY862rd>X~z,֛Lu sui٫Sͳ۫KpCVzCv; |*KV?଺5ͧ+!kC[!g?0OwOԃ^1",DDꥦ }L ` Xyp2&^Fwi~W v2agF{]HՕwSRJӤPʊJ_s'>+o?w՛ҭߓ~B3}SRUW>cZq}t6-c5Dqnq$|Q[Q47;UTԭMMX6+Ӕ- J'6N JJEo5x$U)G%V_1(x8ڡ0Ճ]y"h=Er'=ԇH SjOBgClM{ܔMٸU9uL˅),:w=Jȵ!Df)4k5[M^s)62:aΗ8B:R")9IE$s2JaQiƝV2  B' L hG@]ž GꩅVU'F Gժ)kɒX+NZ!@(BIؿ5<Õ&H4IamP)CUFTBCF<($A+0i0i'{A+H!s#35F.Y2Dƞ S@9 qp#Y%o>j53.|=,?0NC݃ ~Uun%\T',&؞̰xMql&f](Tj B@Kqžƾͥ0u*V M G !X08EɄNTU8Ax2c]@0! &K ד _`F>& E"cqS6 Їr]8+fV"uzھY~k뫵px%>*1#cH eVeWF9 (Khյyx qJ8?<7-y& }jKMt _-_7?/^.Ub491&q<::nƖ])&O:&co50!#%Ưo餫݌:a2`ŴO'@WmNFMSU6dW}%fYZȎ0n:w*X$6~RR<[ShҼ.hv /PGUb_8Xԧ?xa ~}X>+M&,W޲i-$gE rGWGvkgvVKn-@?bU 2]W "pW>HH1&~ Uk2UTss'V!jG:"'‚_ФMo.Zwv #y4Д&!/`̃mXeR!LDic X>co6,U^YK?-yC A Y ,P-R0G΁btTaq:ۙ|x@țirywzѬ㹟&[aIȪ>8®4xO)mA"Xge>Z*w=mAr-FI[@% e^n/i$L\0}Z5"(ǀmP% S OAs|<$Ěu䭢?(=DWE1(9N"&zN鱉)05܁fF}'&mЉawW=-rR>yyϋCIx%^h$Q e-K[HTX4VBli-U/Yd7Fzǃ].X\cOI[޾9^{`͹`1|6P/R2|%gKyЄ9GJK1 7^Iy7&ͻ9hΒWa̲u,ID&TBhľϑ8tJ([`D 2׽9];\YAК0{ h9f,Jf D3dBsRFYf)ɺ5 '&KnԫgewĮg ޟAx,qGx`I򎐞F \ SZ'o,5V4B喍{+'P Z$ 0{kn1Kv [Ĵw l2zk5ަb*pku\M{A-[V<@B%R2PA*+eD˥(`UB!U@'ǾW=<\; M>20Ł=Q*x0zhD <3)6bǀ'IqGZQAkc57 2}:am薏~2 z9jpE4]thIt˨_V#=& !-"XHqՈgXٰz40A4c5_G~~ _a3ꉙÂsnz>Ȳ:^úY˴N~ tаfqlFhrX6<͆X᠎^0::!l:ЭtnD@Jz ߄ TQ7Zwfm8**:jѮڃݱzsgU+|PAd\ QBEڨ–2#]#TzBq䚃b@Q(6F~ Gk ~*XTOp).}Ԗx@"P*DPR5c6rn֌l|rq}uu҅+R2l^֧7*W݃(x<0U\ccj(j[8TDʜ% PRAjgM̪%= H ImR!`V0/2o.D.xFN'~;vEk^kƓE)$ZQke ,rqF%Gcj{_i̗ťm_$ . Le_mؒG7[bek-tVE6YXT9{ d~L !ePt2Ǡp$E8IBT'0ևs>@<ʢ(ƭ9yk~S#΂~ӈjĪoqQN2NH¥d-BaQ @~%,cHII6k"s3>yR hT%+>6g@Cߵ{gN%yҋy6:{%ՋЯ^opeՋ/)Ozd3 PVKLTByŀVN9Mk=ST8Ӌ[kvW!On@ݑ5M;;q A0.q Dя*L?=,y8*!ͫRby[@3˱~«k҃ׯ~O޺LCqBΆ 7./CZ~-Q0q>w7<uD&WiR(Dd L[g9pME`vNf;`Vc{}5^սJ7FuսdB4]̧(R@)h ((A cYߴRRDF,S>&E4VH3p<9 02B[\qboF5.m2|i羽bCٻT7S[w19޳|އ#̏Ōs4#!*#l0dK&HZ+PqHkMPThk$dUpl |1p)ZT9SY{| h8!Wvv}mno@ո"'B }d{@r@urڵR:zB`x(kGE9n+rDs Md|J$kq`m {KX]-»(QjEx6 v5W=ȳmGv_R6iRk]1.U&_ƻ伔ac!j+/J(9ˏmxx}s?yC7zwټ_cίxmy~ǃg[^`qԜw~~,Tv+vUlD%[Jd^z 7WLRNuIyl:P 47h8i] N喺uP,c}~KJ"(/m' q5]!( EJ* $PLXmvH]-u-H) H)4ӊRWE`mwF]q슺"i5"%U]@ue4;;5lg"ۮJTu2XNsʻw9? 
͟r0ic;yEk^R0JXk1 u, zO2@ 8bD1ɔ!ؒ(^ [qf sԸ QsXyr5;%BB2R((`bIqeu昀yUXDI3G+B2n Ð9 WrY 2RX2>dXAXwfAYR[*R.1W,0 W^ Wy*/\八pLWjUʡV9*ZPj/Q/W _3k&|PjCrUʡV9*ZPkQfNʡVcE9VʡV9*ZPjCrUʡV2a?]Nk;Mv|aόW E[D!cC7cM"[F Ҭd{/YVY l\B\@5 :`'bN,sDZ^+"~yXI`aJuQ;cIyIBD`9NP KC߇<}s7@u><= K v7h*t:?dXT$NbN0`V"f/1hs]V m]R{]u_BݍiekdHy#!$[VJ͒l7R,( T4C0J!5EkliT-Xk/7+}T݁OLJ'h@C|>4ӎҤ-(f !|+-rf1p)MyV&'[;qcv$wN &}Dlb $S6jQVU&&J'UvґVgXܛ8w^"0<_ 'gsԟ뉸˗oN%muJѾ2_pLL"b@ +=7b*~"x-AK9ZheQ3(TOjX+׭<*#r9(QnO:%fM:"k >Ө FFS0iIRFCë$}h3y9D|?%,i~i@G+^˵' t*"ʬl)eST+:%Zj )dU2943dzpնy΀BN%rcS$R6h!$-TYFjCf#q+DnWrKZȌ̒*CxZ,L)+Gc0^kҶnc)$ӿm[Sh>sK1 X}  /%,I .~Ҫ[+Nq>p;YjVcttFM$ ,$zu6%_q_ Jչ5]P|s_Ҥgg) v4ŷ+] .7 SAOc7&?f)w @/NN!|L v܉FH@8dmN@h"žә?OHSή%'iV`=X.1zf0,.zj!nq; tptXfPLbbhSes:%ѽfGxQ 4dyEֺ? EJp-6J/h8#̨7.Zfw7ۋk\TM.|{19}f0R0g OVث]r4IbpOdlEK-IusKg᫛1;X$E?yijo޴@FӂـѮd}vyE_.kIXl}ŧo4w$u'jלrB{r_!iMwNM=oK%{,v*r1^AwW%}_NV䜭oG׭Hi1RTIDƒ1Oʼ ǎKKAQ)0Wywmm d_56q`xJ)PVCT )B"[2e`XLZzE#6}5 HLXOS28gn$X&6@tGJ9eɲSxK]z,~lT&˓[n-%ݲ}>]I(Qwm-Jp8U hT!OMǾ!׆d>ZXKdv]#␔UDK)mja{I_IF7eOr1GtK0>*$FLhve0B̈ y5c a|3jHgx\ ;wP'c>‡di+o>lsxP{Eo1Tsd4`~ ~|y翚/މX>4>j`b49:or̨nՄ]{/GgKz?]i'; .~(~O^{঍?sgUpŏ1M_sn Az?,P.}MvIt.1'X}l&g.P;]a=P}A=, үf\wR/G+5\^rtJw]tpiFt˨T7ō:L+HIJi0Έȇy[*Wƽ6"mB^J}vi=:|~U?G~k9KF=1X`\8X|Fχ:sܬ(h\2-_gkX86ɸ?< zp}^D/tNjaxx<zxhE@N~%3'  %c? qoýOB[nRW\B}+q3c3n|yۄ"Bruϡ0 gO.Ӿ/=+>+#8U2-$ RD$N^tROz>:IqsK ϒ[5HJqF:FhˬVQO2-/K2 3_1"f[R$xB&u@Ͱe!߭MҀ?ſŸC~0@z [<̮ts]RBiS\=0MJKIhx;(j֫{fjEW<ܛ`<jvm$>?Y2@m=3h޶_5ld]ܛw|n΍iXtstwwyi'J J 紦[!F9222A4(gF+)_Hū_VҦU'w9|)^8oxv;gr 5EH;mH5|'F͂E)2n3R3CzY@/qGNΤsgRyzcgO t?;}ױmpǶ B>`6GT=ħEe'=`^S75y ToN# E.NFf ~51:QNvrm*<Q.9m*S Xk9+O?\5'~h…D/\H¿ FBwMa^޶Y Rfe&#QH!ŇX1cӘG[ ilg#gwB.,5"Lp9EG ˆ((0΂KJ$Ol(,*jd{=IڛXDB* 9N|0a!T)J,Y-(QvKۿfwũO6uNMQ:E⒊g<9v.W=jKon*i!}Uj%m*_[ʌt3S őkPYm{ZJ("AT* @Op).}Ԗx@"PTDȘȘOWW+gl'sһMOfdWY.YWn:~Ӡ4lЯ_9bc05j-X`\*N"eJdwhQ^I &fElhdI5cxHM* %^Fґ0х'[#v6rV# +;vUڝ{g DT 0PIVZY1Gdt^+)h kUXi0x ޵@ȍF!7!>eN2" @A,cu9C֟\>91&BGK!zƌƁZ 1b"y NVFssWQy[K/?I3\^]Ua}˽*ªbM(&ZK^jXFJuT:aIKm}t(խXm+q/G4|,d?8^Wğ V#HLcg-\ %ȗZrV:Afxf},/Jɹ`Ldt2ą[wN!H?k=| x¾pbkB_]@)إo>f޵5q뿲T܇U~Ph'NUS''*\EZ$@"ӳ+KYm{Y yF{Uqmloz}V]֠jߜ7|. L<{cHT (=7Jb#r-AS6kmf7cek@#GSGI#4/oD8X jbᆗd_2<ċk{ʾ{P~T{{kpwYrBΞ=xj(:pb`VGxc97;PDE:ӨmRoc^dR"2]QЊ"YI2hJ(K~!)1w qM{2T^P %˛Qޞ 8UO\8orod5::; n觾fLA0NQ^Y>v*P"5:Bey @3cc IOA&L OFkŎo"Tq%Kug͘Vn?rHk>7VkU!ų:ߓ;e`wV0g%(hlgW1 ||F\|A{a5XsELm\ZH.::S|txV;؞vKyYSBDBIR^Jy`<) W$.Tl™vN+5 ˸ ve3xه޶h_můS}< %PxO{''\(Qfa"ZFF铓ʖe}DrY#XqjkT'R *yR:?Hl8q(gV^5F>E˞گ<yT=J.'2@h9vdɛnRvc9稭4&٢湒!;<]ԙy*z+PwE|9{nٱ$On+0<{IvA5[7ךg{;(ړC ā>@c gU>t n B@ kBoH`:D|"O.F2I_WvO$,b*76H SqDBZͱ#A {ҥ968Itӿ E A?=8Ҁ/{jzxP&ָѴmoJ`yL8r2l\K6ʔ|ɔOEuHT( Sl+|k9m|tocLD"F2!tb2~W)"k uQA76| 4M +4* vlbsLԭB3J;:YNS3DS9OMIP4 t\Y3w%c`AB@ #Ф& -i#oPBB!tt,s}B^VޮX v/{Ё]_M1@$W&1jnNuvlSs:|QigáพC"9<( {t񟯮GUan]Hyց={+[3om,gz_)gV0m7z@3%i+Qā*^!b${Ql4b'y}pͼoJ]R??FގF6;91b&q 6e6M-5{ىKQGrUguM%.d>ץ<ѽq]6ޖr)]u\n%DQ0s,O{cn%axݩy{/0b3C;pH~@D@<NZ\M@2)8H.Kw4(ldZpƥ 08 A{m:D*+}`1z!bE ]/XB\#BSϺwPp}qe]嵱u hzzL8Ch^L͕!Dkp~p&'Ǎ3w~# Dgb>=c^!̈́A(iCQΒ5KX .3M@Ȧ"z2q[.DK-,ƙźg˱|>]o&on҇V֭x) Ġ)"Ѵ|Y+6ڃ+'%ZۈHD;JcԳBKOkiw ģZa!FHD b"bvY%.pyJ7F: . 
JB(*8|uZ|CN:_[B,y0gP\9?eo>qT3UI)N8~krPpI~{#oC OL?U71"M}?׸ ?ős%)ͥIN(Kd:A|.͝ jﭪ˜ƥx 6!O~j]?8f;c(kO#5_/{E;v.8RoG&9trb{NȊx&^RUͨw!}DyM&z8krnplNchY`HnX?afN귛~f]e׶^7(ߡ9?|ÿOPWpF-NJ.݅>m;p鿞}Ӕjۛ44ߢiFKz<[I!h5kqf/tmϷ_Ö} ڼ/<[5g|U|j~\+?_3=2h>o@ vd5>hyԿ VCn oz&#~~W#f,"&"WA#*s)c<[eR6zZ :01yu=cx N6:qOYQGP#.P﵊;)H &Q^z8.="{:8y<8C1X3 0G&rbƒs2oA3gUbNc3x7Mrޡqmo<޵=~sI,(z4JbFZsFBm7E4[_F&;.ormv67|Pu'g\sRڅ9rdNq~$S- b sYHqnK;8 6>, |5.ZYG-)e%/Lg_i#߾0 Rгd}VsPXDAZ'#'qEa>Ej8*'`{ 8wݙ}遻`fQ:m &qu~QoFr!|7OC}+koE GpnF%^x4$@!Q2T%CN;UE؋EȲ|3 ax:À62&I" Ȣml9 BP4R޵dٿB&3Sc0IggIN2;0#KI7[|C,Ѷшd*V:)Oj%klj߬i9xzbPեW\_L ͫOZZw'HDJDqjZPI\*X(D x!lԳzzјs084BDd+άpK%-gV'ƍ]tnݭSP8?0stߠ*c8"B~ &|&H(:*Pd$G "x[;)ܺ3]}v] !9C7Svd77 SdKR$:pu"D>O.~p1GtK0>*B#&2& ̒Ú1Ni4Ikt>Opm&WN(Uq_K%Q/VbMٳcJim&gg_Y|pl3xA}*?-|hb2;zeXը+%YWru|wFL3n5%|.|_Bq*{{x8}UHi^e'fi/7P_!OG)ӬiS6(|tſygS~~,}>ǜ6Zs2[ 8c_Ǟ>GO3;>m)nUWE@JH99I{F;d8T eM}~I})%7,[ >>vJ)3 _I#$I@v*;??&|=,9dF> Ʌg\M|^He޾fO&" 4Φp8` :YB<g; O'c5ӆWˮ͡CLk ; 7s~W/aZs V9( ;w*:SjdA掋CZt?Y5}d(";<9 t #Tδ*HKiPQ*"@V, Nf9]}7kpxV*H\>e h)ZtQO}5K&V X^N3].8"1 ZhQhur? Hȼvy*E:ٔOW&\dqi I_]h!/^q7)jH5 zT|w 77y{\W1$ǵߊ ټ[9P\U)}^E8dNQ=Pݰ(ֶOc!hcFtSe{o&܌lWgIH h?{J(Nn2-m\E1o5q 1J=ua6/C5ϊ))UW YRk\IR98Kћoޠ3{VIB6"XsW9եUNV_[2s?(M~߼j#743*&E*ùu; f1WiRNHzEǩHȮ)R1V}&B,QK#Q{!YRFJ}BBQNC2(E0k5f,`ZFLcڋMVHKDCYSZޥYM~_3G1k5tF&RCWI&YҀ ^E =^*V!!4 QCvY$@*Ű[)r-wQQ Y2,4=ݕydy7t촐^{|lsF&EpP^Y)EIŜ6wE9UV9Zɸ0y4-eF@l TzBq1(FV#]P-GkQ*m POp).}ԖxE@93Hmfl ؞FmYƦ\z..g[$xEFyȏ] -FՕ_^hu4/~ph(j[8!TDʜ%+ EFy%2(ZΚ*cSE#K{D-@ 6N0+D{!CJGt`ZD"rwݚ[s3c7B]a֬cWX[{JOOc.Պ]+K8f4r.9$@&YV`p]Lڧ i LC !fE; +$8QpX:ȋ`̇EP_'bܙ\ڶcCF1LjgĞUm)pDx .2 F" k+-' ۝ #$$8pFBb%#1$u6m4_̤16#1s)eFl ͌X(+ 8WXgkV)/vyqb=/>Nlp*d06$0p(riH$ !an+nY<Ŋwf-ؔI|7;dJyh`';L(oׯz\R@<{_pV2QfrɴB0/>$)SFC:kUn8bm}"$Br㉐z!7;J$B#}\AH, pHJP˨M@)2~cؘn9\v{ǝYuz9u %``&&AR˨')I91q4 ,8S BuBЙZ]sw8`ezcݑ>v}UBضb׺QjQ\Yd-b +bMRAf6([8Sޞ_wU2F&7*ڏΰI zD!DMqԱ@=(Lƒ sGn%iw΀1T)b+t@Hm7g)Oc.*5bm~i t^ ]6oWG^҅[4^2ba|>=+Kf\_w0K!,( KQWdXZw]PJ$zG("DCtk ӮUBKUBeOW$Jvf3t Jhӕd^1ҕ!J ]\XW*;əP6{z  BiWl"s_ ϟgJhaqZ7:vӪvm?Z O}3D\Xtg ,Uq-C;?nPݻj?FlzoX"tu;X=,]-#CWC\ "+զMHw`;CW JhuJ(L.yWXJp ]%r罫R!]Q!UXu'Lpug+@+uJ(1ҕ4)s)ժA$XP>p!,C|ѳYgl(/P#h7<$*h}eZq덟;m{f %ɚsx (nt~' v^[5oAKSawoHpeglox~\ !J ]%tZvesڳCWB*)JEwWu-QR!]I8zom SڝA ne,R˞!])%D+hw\֙VUB ՞]iҺZZKnH2ֱ8X_Z>J`ziֵoOb/ p}t^O&4t/+> MEx̡޺{?{[5] #'^֐p=س1GtK0>*A!x:8!Ú1Nۂ$mNl:I&yJN8uԤ/6:5Ęa4*}tZ.rE_nUVFX.2:esD#~&ƴ_[h+ԝȮ7 ]2m$qf}`?x|+w${#].KWChv(;AWtE{ڴ暡!Jp ]%:]%6xսѺ9͹tDWӮ:]%Tt*]8'2Ct@}a XnRC4 N>m2ԮtBIxOӏw„gOF/t+e[Ƣc\G8Cs4Cw>%ea^]z~Kn0t)np09sH[G‰uXvhm=l" &uD9hkcv7;˜܊TchuBι`1][s7+}LK8MU[[ݪÈ2t?Sܕp͇7MF~"w{~K@OU6Ol~)׿{VE%uʅ8.u\1 0XfY߂6*fC6*ZAĝ7~Zi }iyk>Gn <&գe;_Ϩ-.Mݚ[Bio.,ٞ(<Į[b˅fϣkUbߐԨ(aUnobt9}vmMŠ[F͆Ӳ~bH/ Ʌl*OTvPQ* $~eegbM\s=ymziJi"k/v(hƇďDGh]4*tcdX4n){NxucQ#zܯ6Tɩf/M׺uw KI=mN i(pϽihuF JkwҀe-|}vV&iDiRp$o0CU7FT蘯;2*07C_br֛v֛7s֫Dh@DPV2dx[tȁ˼HL5`.8 u ) !]֥Ȳ8 mFPB1&dN'iD ;sY7s,ނWSꙷSOo߀>33OvKMrnyGnhBA= V|4 X*I+*Tf.X̹A[dЏO+rҜ#3˱ |ω\笱mR\XW |B*X9I*Rlm0k|ЖeQȑ{Bar2ceUnwȓ3"-\fa BEV6 Dhn' O]pK60Bff1罵ibswp{ Qja GR$R kq 0ƪN6:Ϊd\L28l |6L\"T90n Ug}x2h{_ٳeX~ \W@~82Eģ/xQ m5æ=vEoYɲče͍Y1U6`9ȑ)W!:VyBlPq\[msl4&hHm:S JN ޒ,EZب)0`SAb:\&Q(=W i9ι[GxK Ѡ򃑛vO=GdN=t>VlLȳ4{ ,||Y_ {\{G m:Ҭn]B.~0KAB.oJ]Q%u5O?{@4-:CJYQj[v_>!@G伖r(w;u|L'Kpd]EVˤdVP-O=ySGY[svM5˭d-0e6'cńFpIRIh(SBXa! 
Z iszjj5؀g2ĠBElx4HS ҦHQ!eƜܣS@G 峸/@ VH' CdHJC.\J%Rˏk$IIm|]sot/1ÍΕh2Q@BqUwleFmcZI,jt*$Hh%QjFnD,HoI HHiL?Z4 ӤW~kro*=Πx&w.dwh>Y_,/hUL)(ҔW[/b}oj͎z;5X.l#j;ȝ܎n %lK4NZDsɶ7 c*j~c8"eUlTI'a %AU Thb=ӬT:~GKq1Y%a$ T:&|&MQd&}0&N(R #= F(=Ӫ87: 9LaԭJ%$&1+,t6એS= L51m+NVP&&uLd֙QլEQլ::J(Ae/{FG%bTxR0"1dHYi6I6#x$2x.dtGt((0rFWufcp88l-Ry}]l.]?:%e%v\LRFFKꪟ퐪A.ELWϤWsiL8IY ΕVFSh_/z}^RSEUBrQ ÑgyAz]D@4S L9feћfIwMm◖|F-h05nXȖ#cY 4TE !P&n&[/ZA-W{P4 If#br<@HiaHa diLh|I}4ye @m:oO6ضpIL:gGNwjnǩԙ hk̐}Y-PY^! $ tee,9FcÜ11 D=)'z] ] p]&jsiƸ\.4=v 76Be|{Maqʟ'p823P"z8KY`F%sI ;L4ZDz9ikv' aD"s3 /) ¨:mnjmBr_ hthNSp3⚉nwMڽݻ4\h(Ixkk2fHDRKy"Mc^I,vd&ҋB&eHVt427Ap$ GBI:i a>dP>ma3q=ҢF=#шTp.Ӭ]uV(Ȧ,)o8cg k1l$QJy.<#L*hɄ@t9 MuÌ&QS{r6:%Eh1ub)Xoi}L's(KS($P^2NA=//o[Yy;ɭ }ae,79*VE2J&q(J͸*"g@4ݍmqǦ,fp "({q EeXw~|GNǭG? i@"JL\2Nj6yIS.‚t(^ p O ~GZ $ߡk<|7`K-#1B;P@sb5M9TN$"(>$seyL{ȸ窜W f7b3%V;QOՓb=*z~zuf) -lWۺ7_q *Qq.@͙67% MD.'Q+Į'IT*qH&Z`ZL鴥׼χH"18rA8ʙ"G!D9-FڈR-;g|3jmt/h=E8 ;AƼay`6R%`,'՞( ΆCϤ!]OeFs ΃AkX"K~-3MS*-5g(.7c##-Ϸu;"E RrHd 3âҌ;dJp0<;ACcW}N TZގ&VOXbP< J1(H` Z5Fa,i`rz' mC!Iؿx^H+M$Dޠ$S2) E,($A+Pi0i' ݅5$A[mAi-6r (!'>F0LRS:!<$"M+ky#·0[7s B  >8jt%(V t(5޽y<YvNN:Z ˰g"vU7A.uC lNuNS1E.lSL@^tB__%P0 *:mqkB\!jra3r5[ϝ'ӓAa0Ic,gcfh|P:ӏu62)KlB=~c`GƐ$vܽJAourP0Ьk%g b|2WRRYU<E{YbvM"3b.?ANϪz> Αl0|9)^B -1"[b|uKM͐fds-,ZR>QLpпt1owћ뜢i'Zm+$ˬ"I s`c,B6YGauH+*zuz3`8:yՏ>?w?<u˓?>?0RxYH3/ӄ47m*v)iߣ]rCvKcf[n(=~.S_Oo#\xkVy2B:=9ӟٰ/^oREQ8@39Ҭ<p_:Z&d#fm_C@QT R2vYJ֎dž0 ).Rcu0yyzEڎSpPD`9%sFib# TKP)X#rtZ9)r3p'mĆp'X wy56=S3oB:D>I$/ĕrRV>ͯm[ 3*g OAs6"PFG4YA<ݳQ, YQ :˄_A€x0 3Mg$3h#\5*Cz Z7Gc.*S!E`_n.9L1\#7gY`Rېx+h(pʥJoj'˖ \9MeWs~p>~4gH>]s͝~ ?[9g j&Uja]RnT*ԍ….h݂D#,J{TΈ F uPf:m.+t.zp'^hZ a"a!- F hF.bcRJ52Kz妴K oMGg; dZn}tÐ}vxGK!wf";/G,GRt^HOFHE&gPex&A0 K ՁI% ;-b;` lɁ VKM'M4ywv? num뫩8(yxl$ LQ"%DqjZPI\*ve,QA#xə?Fs<_\; M>20Ł=Q*xPzhD <3)) zyHOЧO5, fTVRnGU*H^TK~/G D׌EY`6R>a" eWa\q~_ٟu눬wIbw 7 iPuпM֜bB3 Ʌsz?q:O^Jφ2i2EŚٙGI7b;8,z] B'ãCA0[*Y+:1hmt<.@DI~5kxw-Zs3pUHn*=t+q3c7BEru!,CO*˞Ϟ=>*#83-$ RD$Lznr\^/R\74*t_g|:T-ZaVh F\aF ;Ɵv|sGL0FUV4v?Mty;)DDS~p56ٛp(5ӀRY]ioΩtkfL~ tγ-Pk( wu.^O 0ji n$Ϲ 5 Tg -޺_Tld~N>C" 7iZtuh32}8[DZ)R9 jkL ʙ Em Ip/v+|//|3ۘi"O )Bi#`Uw¹a$ZH,X@eqcB8|3[ 3iؙTlӢ^j>gSpnr'm?ѵm%ۧ|@0Fjo'rOr ĕҒ=W@BxoU"JVIvY\%*A\} H=W@boU"JU*Q[eq0;A\1ZhBbPW*Sx%.Mej=0 V% jQ4b`"7a|L?LN'r]sc4 :76 Nj.Z)qݚ6+# 1.dΌKpë['hYQRvuJpz&d%: k@Gk-}.LpgUo2s8U^Z`U7~jd#E9fy`Xn2͸}P片)V`g_2LE^},`SQ9C)*,cO,V3A R1EE eJ2iL^\aL]5ݠJ:?3~T}ï{ˌ `Qx͂VL" KErl sMÞk8nؾVjbeXJ Ch:D#\2yJHTE0nW){`cN(tczt%L+bMje&P <dhXᵌƔMVHKD#0jMC /.Ru/ȹP!#KIIHƜZФxHtְ&(Y ܔ/8,fH͠cF֑aCA1HH/Ez$iF>ՑY ]^CSm Bz,ܒʔQF(p0U aYڜ).DêN?{ײٍ\b(FhcZa҄Q7imML}*u,vTJ d='O8 *tQO1* dA> 4‡|gWbjSIuY(/`T2y(U-˗RBu!wys'ͣz~-ܧ@s#Ԭz[M6E5y.(]c@T+4^8'֤iB;ژRֹ[cOTsHHI?ȷ2UwM(-ɄDs񤽶jkѩrm茚*dlzʓ -U0rJ(#m'XeғA 1Ȏ ўF >#ovtP4BZ% l$S օܢ it TTPtC[B Z 4BsX?W}d\R6JeWz#<*7%,d,M^pl5)|-b n$l M0:9-835giF>l;ov E@jJ94ozd֦RT$ToF@X#6!{ˡ7<MsA)9Nh'F5lGDv`LmW̙l`_Q&te'xJӜ`D7j! W۞dW1v|!7CAAdܡQ l;-EP C='X !a@YPT4MNM9FU]c)[K1 ,,x:L;*q bp0ujMPI!0E|Ȩ nPf ɦI{X v-VhBJAQ"Ł.0̑#`Pn֖u*%dNB4KTI]d-=( E[cI5 !ѾH A e͜`3d5>! Dbt\4l=x_Q>Vq}ABuf'  [X3fKHN+ .*K; 'Q_>G< }cc wo븫Ud}T7u` QCL10h!ƻAv T xV2G\ t56lAhqU +0똆'z$;v %tA\xPV24ITi!Q!Xq#`8ӥ`]@b= eDA!DAy1k"! 5y]AJ8\G؉&50-a cY mM!gTBؚbut K)+ .E[L4f@2턠a*PѠ;#J&l#ϱ۰,AJ3-;gC2~ȃ 8vGyQ9OG0qVcѓ cA8gũI!۔55*A2J~MGj!eќ5MVUFP[OTZs&^zkJB,er)40[T_S÷h5g LވS@Se7؀~?ydfs$j NMQFRbѪxdMAh`EH֔70hWf@2 =pe6.6čtXzg*w6@15fs˥ZpUf4&XTdǢhBM)|x˄ClM](I jh$dipM(sEs-n+G}dAApՋ7B1]>-[N~h4hWӺt( e\Ö]|KekJX3ޝ zrXy.k~gYgOtUS}v7:B?h?[!' 
w.N q})'k3јN/89>CӈH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N q- ?ȫ4C D - D:F'PHɝ@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N#vAa  : ZgJ:B'V:X+N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@':F8a@@uXQ$Nct $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@:&'[Z̟W'.x)?f{}\?kf^~ 8FQpc\\o1.mL7.S$ƥc0.}@tŀqq.`(th?{/+FUքq:3bdNW2Xc=dUrKϫx6 өaN.Qt q麁Qn d} #p ~bi"Y|(:BNh+vvIdUF.mXJBWCWY7$2a\Rè+Fk%:F?hYb]1ڨNWRQUb?HsWl8S (thPڭ>iNW{@&X ]Ȱ]v&80 ]1\ +F %rv `0tŇ+^|9]171 uqt(f؄bk/ѴRL3ON{ӟ%Af{ɝmWPOk|k4/-z򂇱aOCso=뻏k34{ཙ iRkkmY:6.zS k)tvV5F1L=}U/dgg ^;dݵ֮.T:;s_Q>ZiYITr (MnC3J*#(Gx`ugYFt D:BG=β5qb{Qn: !DW8:5 ]1ZbQʲQU\x{?0mZ|1A;#7Ɏ4яCW 7S m2+F? ]'v}Bvy틹O*>uyu" ?%2ڰIdFeW+--t+nup0ȌP BT:Bi n< eTN*Hk?(ǡ+kQWs.rbK;WUKX/߾ 0:".!wDh[(`_pr W?~o!__)+ >ͺ']l?>+Cz¹}2嫈v/׫o}eg^W=' >%ff#W˳+~CzD>Eցf~WõU,"Zm.WDiNS+19 #0"W+jrrureA0E+"\͌"nrEZ;{ɕs=vN \n>rEz]DiƮGȢs`XKG[jU;Zvz` +E-HXFW\h\\\ fK"\\!rr%3~G:Erܙ5zݹd l|-r4އrLTCiԆ{LLKqow'63At8c?=ʘh1.Kmp&#Xiv'նϓxwEҎrurhPA scpػ|Ǜ][7--*L7ak%oL}%.#R0Ϙlh5\;OQt4d6rE"!z0t"J3v"\%pEָQQE2$W,X>}W Qq CAer Xg$W\|0XS#WȢLscjole3᪞u];J;p\Q^Z@W{+`e+{'V/WD9`pw+"#B`L6rEﺗëh\)ʕAFr1Q+"\s+5vrENrurn+=nϥVR)o%[ -29崋vj,3ȥAuƚ=K~Lz<#ǣ;Z0rh}d!|::K$mbx"JH#9e9 GׂEz\;NQN䊀m>\.ryD_/WDG:E2 Xq\4Z9tBJ(W'(W )eFrlp9EVQڱ9"`'+UMW;ъD)(Wߍ\#^>;l+{jGkzZ܎Ld \9F، -GT.rE ]RQNP3䊀F\hRjF:Er6w@ GV6z"p ߻lK4eQiG١4RZУJJaLlI#90̎N}R4Tٴ+Y6}Hz~`A~ d#W˳+UrrEG:fJ3\\!+T\\Y4Bo9g:wE&w(aPtr|9:+#aL("Z>#WȢW1ZK+zV.iG+{PԎR]+5KSVg$W{An;lvF:B(2+TWrE}'Պ }\\\ۤHb2"\r+Q}2t"jl75 ɕy>  l֫KܹFᘻ}K7&й32es>:r}1+S7"[뿷EO0 eKLe\@ǪڴF7~.??|x{Oԟ^ iDLeVJIF\|ҿaiŮN/{}U Ǻ6Jb-wyg^#} /C_ݎ}~sdbdhz"_ ^zF9-nI'9{_k| /K),A<.N;ɍ(ID$ZbD4wI+UbK㼺wt[_"pS|ҡsyғZ>L:(Jf6dN`b⾴}Fn{ơrY?oTb[ΓVGu6mk&kwwz;a~G:PU]\SOoнU>ů.wS/³TUIXYR:r[Y xcTS%/EZ JUT eL8!;ͽ[FWu;y9aQP` ېyqĖfqy8ufwf[B,r}ͦ{ov~| .k$De 0YyJ]J&HZ+Ijìӡ*788PMs|TֻĀ \||opg8ϿO=^jOw]׺_v׀9p}9M74]ѭo\_u8ʡ ,u|y #˞"KgwY7@\WAN^BJϊ.*!l 1r#1rhh}IyMh>E9LCp.:yp_UP:Qxw|JZU24RQҁ xbtQEBB\rݱ qC8[-̯y0*ʫ_Ppw$VǎԞNG$J:*]GIy>>>ͪNGhȃhyVDWx2BT?9b࿡" .:tup8gg]s8^f ^YŦ(廋O|3I舔WOoN<S1x{=7?&v-z⹪ niy]*=ա+9q/KIcqjoWO|}y^zgw}Z]bS&g[~orm'7K".d4 +e:(wD颌 i ֕D1J8g F{}jƀmd֟WX-*mu5Z^]r25РZt6S+Ͼ\ F ,mgg>}_ѯfg/f7Bbbs5Ƚ#t.w?ݿx|l^:j'6oo^_߃Iğ޹]nxuUyfb5V'ǝ?\ݬ- ?e.ep֊BzV/Ug+}k|}5fU=0xz2SU:1I*SL N !o$ uAz[ +g0-qPEǢ] M-Ȑ}oJDRŘwӁƚ[Me XNݴ^Y&MR!i[foިA}u;DV!Nb^Q0J^ZnVm,NR 6hX/~QОck4wrN!Fz,u`АTS(Dr˂*z00JY;cz)?xNza}n+t=^DcOxG1ο֑ݯH_e|尢ͶzWZQ}ʺ>M-j4n؎N'egsgdfOO;T,1bڛ8W⟓܊y?TECuP19j,)Z Ko>dW KvG3ݿ6D‚bEu*U]ˮ$>h, TqF^!j#FVJЩ<{唑) ^H#!8ѝnkt tB#ٓk뚼Sy/:u2峼b,Pv:dva8e%ag^ɧ˯TYX֝/?eo_]^=M? n^J{d:?}yJWpٓ^4mڿYa%? 'Q j}Jx9N/M7bWI>_|X:K{ls6'y? ^ZvR_p#P8zb/_2u_ʹ&XG*"=d l\Ӣr|I?gXI6BDXS\qJ~7~.!vS32)=P`_&|ݯ89ĸf ̓ +1pK*dZ2_.Y3;6t8䀆ᴏ#xg lGv2O'c5S2䥨d9H@I/<Ĵ0..40*絸 >qWH~v U{5w\bnPBHQĸC~0@$Ԕb{uvS_l(R-`qηL۫xk(7 }o'`:~jje{v85LPf^3hު⟗ld5~jv}>+^DDn΍zzl\lNZ)R9 j1ed522A4(gF+sAx _ڴ./BwY~g|y~، Q  qgIE{`Le;fd*Gw""!%UTINB|C.2̸DJHHBY> ͬ*2(:P0k5f,`2b=6MVHKD okyzZ%Lql_c߇e0XAq2RF *`. 
R> K$^<{۬Z I 柰XD|0a!T)HJ\˂97 j8J3Y=SLQhz+Vjk|*[R7:Nc w+ɪ6Ѯ ‚kn^PY7oTP+`8oSSH*l)39L'G9Bid1B})⧔S\-&E.x5sf"T9kmMWi [MPuXpIQz"i nf|sA4wQm0p\T6Thw\uy2GyzEw;lۇ/3WI)2gdLomxrYˀb$u:|\eߏ B]تr@Jeơ'R\胏'-m;Q0ApV=tk<⶷юm]hƶ-[F9YH,{GN+U@.6I* 3ۨ:CAm~ Ü_tiy$ ;QO&i,8װtkN8Ժ,uuo@(_ot{r V\m-4M{N&vZTYZ{`:i{P ^(VLRݒwwh^C}呯삑 F?ȳiU;tt^ܨ[nd^ {qWYxxVl|쵲2aD`R5; UTsc%bDީ4jן 'K= R=V+r0G&%|Y{q?1QsTɵ oō'qu`U1L&T%1'8GR}6+Αl<)G~cPHƑzaH0Vև'4iGDgcONQ ~!FmxV1 r<9C-#i`dR׋mW?y7\  \?LMsw=F#;M3pl֗'6YW+I3wO%GT`(۲xb츠=ɠ=.mTyMop]+tR ^Apcp{)T~/Yy^$zD t,$pM BJQ:"ZsYqU6Ji y0 k,|~{:;y%toǫd+O#o9|vph!AAɸsӐ{@@>8^mВ3D푩RʕCBa9w0ix `;1$Kʠb!E,圴NE %{VBc2n hJ)%:ZTL%yMTݮ;[Q׍*a9OlƧ}wܥӌQ[ZWLPLiLB:pR'/ )0 `o7$9@$ǹqsFxȀ2liEWpkl^kqwoqnb?{EژB~N1)߬Wt4K-Մl)\M>\]|qm^9 NuWJyV.MX(;LjNs4cmYe|* 0pt oI 3` >*kNO,AN)eCp\Ef1:%@N´;[gKq;㛯5VIԠ_拄Zwfo!n}7:+1)}q]gX=g[ yh!^[_uF'$\0^tRzhC22YWH%+*YMg7wj^\ xJ?\FEil=p͟|uCSVZ->ԜuVt[޽1VeA6}mYstCֱsXq|.1Nc sSZ>ME`5ROF# AW тP$1cN<z<>ԓRO.z`Id*B!>sq!GPQ W2`4}vYv+ΐYisxuAS`# g`Hڧ(xwm68#^&{驾D fUʆ+ZLv9j'3&tGXFK3#-һ4w!1WZF)3A!h~:]Z/P-Ծh06iLFM7Ns*A+&qB@cɢDa S#d[2`ى}֑[:u )K@xejv9S.H.mjAq@Ȫ]tpŗ?/[T,7J+<%BŸ\e88#V>IJ*c#02gZTQ# xK2EcvM Gp=Σ: ՟sCރ2M Ҽ >:&4[BLs3:Ĭw\z D_]@33Pf*Ƭr`U J=g> TqN=";"7Ev8yOB^nDrǂpFmMa/pYByvq^ x!l䐜<w!*t xIF2lM8s/ -g5w׽N(g̤vعfNj͗:~x%{YCe16JStlڷL uY ciiu қH, 浀g _'ā/ b͝8*6Q!Q'H(F'䚝|%^Uz2xu.]5}6 K^ѬX >1z"֋KҸ6>uNj61ŞN^k_o"u4ޅ^^XUR (NpR;]%6MΉs:>FMR>MK&ŋi~MT<~0NHsa _?OV.GzO_-M^XCKjU|mKJo骩܌mdyG6ޟh(S|8:ѢU|_@6*:jC_紳[+~Ұ ?Ro{e8'73J}TL}No_.ǯg? 8{o#8',^ " DioߴPm5M͚hZ6{=-U.7{wLvke#^@RˏS4 };.GhYT@LWHlZWn*j~[*bUh4%uMQ? -{U4 ?'~:R@$ R" ou q$TΡ,{L>m@kocl˫k{ mv;ODɟ Q MFf1KI& 4twʝ(?]?r6=ڜX3 |)NH;;=f`XtzN :øf98W2^?LZ a~yw~DŬ=CN~Z}1uV2P3P)e gӱ y AILԳAlN GCs! @\H^u4>' dIv3s4)caR \{n%WJKa>mZRdC<wܚ8:oGno!sUxC[)*1ߜoeZwվDF_qDPf TDzExE:-VT.ioj{i\d9 Z*bWxs sSPxNϴ6(PU܂C۩k*$*%Yժ\N>3ΆTTrԶ )KG63O*[PM!UȨ]^(kх/[ fgB[`3U*)FH1h!JYH)EIDTٻ6rU{Q2Pۧ[lwE@5:vS53NĎg8vfd G"")Bzo%"j-EHe u4GZFsƱT&y<0y=ZǤϝ,^/ I7^~ " *pIfQ%DPhEv,Ld /7l-0<0p0%lW S/VB(EZK#'4rWÌnvJjn& Zc-gǷ-|.V7gf֖ ̄{&1cq9ZMVyFFM{&GlUKi҈ě oc lb'Uņ5.n;S,BD.f] q>[ ![CpN QCV:PyfҊ|YAРZg"Afc.8$7W %5IN&J 1qV@p`F5X3h9맟iQ/xिgS%CzDv{$Dr#чWt1gsqPuI/3lhA0U&wp]FP #2ߞ1FKqe%-%Ya=9a90.J|E`.J0Ƈ*KBv%a I S*sl)P | )pR=3D.N=:Nym<-+sÓB&Qa5dG K2 Q4)x`#B%׍Q fG/k*Ɍ㥮%hB*%J黂-&BC k%hO1̆5rV jXR*2ฤlku+=M'U>%~xrZd5h \VTYГk/^X0ʫ7*7R92CB)k[.*B I\@VN0c!vC59=ϖrJ zR* Iv7YK!_"6#8^qAcH\D43>zRT%%vȹAͤr=d?Iָd]\vqq-.0N| [OideetIJP^1 jod .n-Xyx=@؊CQAj\ |8o~|Getߪ LsʚRהZK?|B d`CW6JXEg\ Y@ Tl!ݦB5b-Ēә 9hHA"Dx|4G<UFN&PMv;čT~>.;2;O ZsQx5:?=N|Z4N /\q5[jxoٍna;&ULϖę@_2 .te0\`7Em#gK[=9( =PBC ŢԹy9hfkRb /$Ms1kkr ;;<::ծ >z/F09ab\Fǵ1; V80)0A hc|I9;o8ݿ$7~NzsDBSLr7~y񂌮zbl<}am93`ԘѬ ]e=IِinU2wZەO^K%JF6]"#9reL;E0EԂw2hG2rR.>+DNd "%BnmQX :,|",o;5Q C::[vȹ5MV֏E8Ių46CcK#fLs[#0L.ரݼ.tq*tqA{D[Dlrz,[Z6}+P9o߭_/ML~eEWU;¥Mcl=|"@oLHtKƽ,kAgŪ&5ϝt2v^ ,8NhWNkSɹ}jF*L*~럛Gh]^t~'[)[ĥ//5̦[wyJ+w~~&VԢm0kl,e #HrA@rNsq^cXf2z\6goxhϕrdܻ1Ŕl^]BZ (W6'6DD{I-OU.ŷtn:;wwzv=N P ]v>'uܫ5vh(gJ<#dL6{CtKS2**0 )!A9$D|nz8KIB D•A#Q"lX}~x.!V6m#BjihpN2r'x)gB\h`FFU"A OJF:@bb,;G2I%1*V$4s gYhlZ+WޕYpvXS}ꜞ*(_xXk琦P33LڦnDUPz_&LC/s܋ĐGSU 91*ޤg,&bGYAE7˴EPj-Jd, Pζ4b[TuLr EsTeJxdPq/p⾥-mh,ɶ^Fs|DZ( ˕3Vf A{v@CX "!Y03u*n+ xDw.U[ciawta0wT$UMSciT66@% ~c5.e F$}D8+%ho]LpF%BA1Xj;@l>A7+%ݔu$">/)ppezy}EBb|UTXxꥈ %EKch"HS-C^b-bIPUWuD8M'2ЙX폸AZeg(F%ӊYUIb |T6_ZL'! !=ds;vJF[-cU015upܦ:Lǀ2^ ,ԏ LIKL҃&ӕtH`glB|BBG1 @1Ad"e+f䃕EJP*gh@$6X _ 7VDnh 26eSQ_}(yYQj2JRp~c1>FOMAUUwCLm]LՐhE1h6F80fvm^΁6Db6y-A}t@`QV]k@)@f2DcBQBGmP D;#:tҫT%4]VDڜM9O,pZnkTt"(3ZcXDi  39(h P e\H-24\ɂ0C4̹+ 1xDXe!cH>K4ؚ5'0]ib8 q,TR ͫU^eV{CEXY{ -Ia@@ ~^3M:)J ҹdM F`bAs FxX~[״lZvkH-k!nj!@3 L[Ta\;5,:Z kH޵繨)@UGhm2uSF_? 
߼)5g; KDM:39P/3ds5DM'%\ Ec˄Ht GRDQZSSF=Ú$ T ^/W؊"D W}\A' WLSN#7]u63+v*DzC!ɢaT( Eb'UQ`jKc\cGLԆEs819EKkL~Fo 54VӨV 6`qe<(2LGjJҸh Z`yۮAp^Вz3~o:k Pٗ9k Mme7PXťpÀe%3g@BXbN F8@ޫ87՞- ligC1I`U[.uL/cd0+&)$(  d렀)`$Di`Mp]튄@ީ"DoaL`j?nR(i]lMtjej1 tJ9:AKYPI;$$YlZo!ria:mnk~;:P4yb[t; P  ]@dI軩M|oןdߝw`VXuR Ճ;-q'(*Ƴ|,B%ʯ|Dx@S}~@8X]}%bK|=@nЅ]E[et4`F,x)jfi8gU/V!rȗW;>0]2O0zEƜL('im0S znKi7#GL>=?>U'gta8|Mpɇ̘PZEdɜ o6Q[gp!*s㘬0xf|^9#dž<\AhU)%/B$su(EbeV1Km@U48~/{A@p< -+Ӝv6BGxʕb:HiEҮP m,nyY~=mj3go?Z"Cnjz/RZٍqq"{و/q޽>.g闲ɲ,D[wbD Y1ڙ+0]Tdz|XKy5?xƲ,^aGS,L%+WDuY/YΔ|UȎiyR+5T~#_xwpl``@-U&G~ Ufy?l?ـrF_fi6}?M_.G/jd_0q1[<{xwamuo镽?KU0lwr9s%ֳ}Ջ7EY)VzdW"_)ˋ9u$g*b<͓nI M Q9ЗO9*W}-ЁT1}@@7I4e~ݼ%ݲNJΛ79gHY=c6s?E9|5>tTYXoNc}8xq`6gw]#9vwsEʟ_-KWh7_]}zʞ'#Z,ahN]ٌ6>:x'ߓ< jڽ$M@,XGb;2ReK-MV̗+5H/5?>ʼn['\g1|0Ն*DoѶ>of-72d)Oӹũz4Ғ>pϫdjDu9XL)0,&J,o3Eg9'jyF2joX}YTk'.Ƌj'#\}Py{''W8ؼG+u ~˭kyRr)**#5 I9j[uu 6I6 ɫqv'RvU 9ǑWia&xR8!.uNT)RLN5dpY1b}6EldUD BLF 6l϶B1P(o*UN5FfG8=y_Pwvg>3؃" W(I׹(4WUKjm5@YUfYEê66@&⋡ 8\X2 #0 B̰%b9ۤ0{(?NDMb;DgD<#5+'s0).%eWikEg$wF-`}-Ѣ_vCDlfqD$\\nܧf^r*.xj.?NUbI>:fF93C#iH̏)`0eqե3.npެwSP4C~ùޗȣ5^QtٷH;'kdg?~Pco땯6^M.m}A}R7M { Nh{hAƼ<<0o]8aXzti^ ܌?|<&-ߞ\}l%W] R j}9hE&zjW dt&Qz|\oSϔ[z_kxoxbjWhaۡIz5LK$Y{8U6r^wE2j1<$)YL&Wm1I%sU-&Sg]`s-$niۘoVN5m<"Fk1Ngvʭ%xȃ_ db&iXRTRw>k)I#[ |/ky#U>RkRJ;ۥf+LVg )[J-F$\KӍ+v58`06,< %7:jaO!C($/M<܄;& N,jj 1RqƱ8eՈ\C[@H&9%JQh8PZMil,@E@H y(*PGw ~,B \Ӥq 7;3RyƬ⤁1&(2z=IJ!3m<fϰsłE\4 !FB&h>H`""  `::p< |PYxV2Mt\i!i$4**#` aK Qj7z/3~=%'@iL |AU(}ڄLptqX1F /!!dd%*#x6F2= R=Ť<$Jx \w]$ղnR"2-=V n)h v֔`X,Ӯe̹IYjd0u 6trL'F1I0lMlQf1xk[ѭj@ΓDjK7vS{NGFo9)̈-XX=)ZyF9-)j>D t#"Bkq܂D{=iPQ AP`AH (U") ,(+Cl1ւM[ [Ǝ,Bv3`+BqҔ*SI\ \uD Eɟt7x &e^PicDMƂG1dw ;^CTXd t أtah#b0QԠ380e; Gji&z@) $OAk&ddV DڅҵuZO;Zi*DS&^x|<SbmEѫڂiqcX)©:lti -!W&BbX?H(Ar|d8NrO[PO,yJ.]\0F!*Mk^ LCqc I}6I#WfD|9P&T,<Ҭ** $#4@EW(!Z`4r'y['3!wrt [x#mu7VxOMyNAMNW[Ά3 |ߕKQhGss&P<^.nKH6ҝt 7/I\P0uЃvQ=XAϮe{ 14 $щל)\f H<Γn<}StDIt+$z|n;IdHē)t(YG{L)^&y˜vUj'=c?IdS$'_92uqr[KzB|T8gҭz.X57n)yYQ#={%:_>Guݓ- Xĝ[7RzU,EW*]ŢXtbU,EW*]ŢXtbU,EW*]ŢXtbU,EW*]ŢXtbU,EW*]ŢXtbU,EW*]ŢX|*1h_E Fؗbse|1*< XiuWU,Ե@SXln0޿%GKBZ#} 4+Ls:/Ey`HUk׬x" ]=|=ъ-~+ y/同`3_<_/u6&Uj}O_cccC_\r's_u9ӳNNg1r%oUD.-,I%x"ڔ0:5i4#P3 wgAO/py3*o6@  A3b]̠t1.f A3b]̠t1.f A3b]̠t1.f A3b]̠t1.f A3b]̠t1.f A3b]̠|bZ䧃oN6g9سɪVSܶTC ~ʓ?wIΚ|n{ɖcSͧFNt/hbEdWcGȨZ૕P\ʱ xZzһ3Փs㸗]Y}Wv:{m<[9J z~:;:rRyYqh7Ox<uȰxb:Cѹ|}!f#Z۰1!,K,&jAOT}z oszok> Cqe |5|`+-㊈Hx VL֨w' q $C(sȶ?v&^ViO %c:7Cm>t;II  Q t|iihL~:r6LJXIf=u$BR1$.X4!#k' Dρv-aaLɉ>2Ǝ>rÁ3Fj?:X2pff˛ه,ɏuv>ÿZ^MwRxKlf?0og`Pq^x@9^ (NhXzi/(~["8?+1DBԜ*vsQ쥹mAwAuyK^_V2K_٩8 Id0l Y`xGs w4FO{t_ɋjzvRu:壞Qnt<Wg?x}}i P_vxaӷ:^ɑrtx úb%hKX3j?Bם jOAE᧋^_8Zn{G[Ub{/9Ȼ*)Y.[]^MA. uj3NrOVA.C㶇:nXnwHG~xyۏ^Ty͏Pq~87bL>ʄ>aMOoZF{M`ѴVSyߣ]#|v .K@ZtebM+|xJUl,?GzFV|ʐBz <2^I- *B[ƦhAI>)G:w/G*DКEt0l' ,kc$'e\ QnNv5'.R|ߞWPqZ/|NFPUWXbDQI$l3UGO|jR\KD`.ؾ3ɚκ?}G(:0Oܘge$ܠ+;<`}G6߮)ahbJ]fm[`UWFvXAǗs ̍R;HZ$lt2'uX5<…gI/?Yl9/ Su/+QR;'Nwgl:uy7Oވ/wr͖}oP ~%|a]߸\OT5dݍL5SNT6VR9] ie'8q Yx(5r~ּ,O&lަ1W;*9 s5#IϼA`]x֜YRl)!h"QƧpXZ92*Ù,*zEbI #*&ΏNRo?'!-FRÝPG5QǨT ʍ)O&sL,sKޔ0i BG,`ML0.asWRR|J '6gs UPx_jrdS}H*e nL9{9@HݙsrrBŕ˴8?prRXQbl@L :tB^XXId B3" Ii]Y f2"V+s>r+ jl5W #aoVǧ3b ҁ}vܨEe?-..=:Ұ{-^He lFۭ :+;a-ۜm7?o^jN?}tW⚲V)ܯ_^y Ǟ*6{r*wnLa$fh!3lר SM}^7TMr*-c13!2ly ,1VvsHW[]E2-#IO[XKf`kB)3`NfH#‚S]jIYeZ{kCפw}{@1j$5öua/PQ rta\°b?π9-*AE ,שQ-"GxGaag'6'ts!nm+=idi }02pVCj q,]j9tq{2pV/<D9q[ڠz52=o_foי^zEP5؛&PiVţL499 YOӷ4$b"<{R3Z`5kJpռrvϼG9j9ЮbNjT+mR0L)`.$ćad9 AJd0|Quy=Ɲ07 •덁 91B$ҕNʂa4XJgOɔq,ձ\PB7X m$'V,zX'ub0YOXrg<uO*i;wE2q 칤IX4Yab!.KS QV .Z! :sT8}Y` ϸev`W/_zO1 8\('t2+n\WL=YWI5T iy\UZI[@]t^J9q=sJWo.wH'A`'CrP3c FcrnLD=>(eHg;{OxKeĆ(0C¾ˊcHÆ)`!btiD0p*ybG|LO|pϩg)2Z,!H%N[o%V$j W]oK! 
_puu9 N@KeV8e*"fJEH$R( gc K#:KϫBs,jy ZH^s0vs\bV! Aa8W̱=c{κ, c=܉8vJǁq* {biga\}8 V0#(bF 6߱"Hk#cRiCدR?$#FJ9+eX9Ŷ AZqv{ I{. R|RaXT,`W1m0Lk+$z$*햰= Œ&PeBzY)SyF#)#S^F8OCp$FgI ]-YG6PEsqA{ @1k 3NЦ3HHa"RBHI+0 Qԧ,83?dhq0EGah/p64 p:5DS= -.L dS(>yVA-m=3{JuAFҘgfgXGF-gYfwO<^µS:rcf3=[UNALWAJ) >R$Bʒkm=) HI )˓$eL !\1WI\I\}7WIJΊzJrD!+ |0*++!*IIp1WO\):A`up%Jai5fnDs͕\^;Mpk`p(yGϟ78goP-d<_3pO~8TbD ATA B(q}2<B`:oY!)i27O.~oH= =z|kϏ'M~z0&O̗>ɼ4%oz +Ya%+dVMQX +Ya%+d1VJVX\+d)Þq`a%+dVJVX +YAZǚ xa%+lVJVX +Ya%+3#v@=I{MȕؓUP{@Z!IY{dbBXŅD(BXH b!Q,$hq{ {r h}wq(Iw9sW>C ]*Lja"a!嘡H-3q2arcRZ$+H'X`4I%,DDꥦ8&ZH0<Hg1|hO+u3>ݎ] ~@ssVaǍO Z\+K!w0VsÞਝH*Py4jTd ˊEtQbax6`u@Z9AT{T:$`EL{/0d5 XHRJ&DpC`Y,-,xGI4 ;%8;YoXm3'[)iHޝo [Zm;ͤپ^"[5_SOkTaLJc7<  !L`) jSÕЂ2؛D˥(`UB!U@Ós_LZS,c~Uۍ&)n{f  ' +pyi?Lou,ӢuD5[3tz#A }L8cފ^vzsvnSA[~/e̦JM_72i:1a~7v<HqoýO6?{OƱ_K^2R߇c(γ7 ЧĘ"xQRKTKb:E4! )|\=u+A34_7IQ0:_. rl<ŋOc`jVAZJRa zNnHw.wߐ5Woao\5~9P|nIrNv"/L"LOacQ2WWt3~sgTs} mMҬtcm ;ƿm;?=u'OoqSjp2iڶ.wRMR~~mE <ҳ,gnqg[o*sn\QK=jCǗ"շ[F]Z)R9 j1l022A4Y(5[$FP;-mw٧cG/<~}NSN{ yaMNX  #bMfCNqo7aO.0_=9}.;{'*'rV>[Api3NyKPJi T)HJ\aD-6ΓCe=.ɞbrԫ]RɶR,,n[7\_;HIk=VQdF]5垲=}%FQPAd\ QBERTaK.N0P@PJ#ec҄h-RO))&@zK mo \/cL"U2zXW)F AƶPi᳢Ɖ3^l^fku7iPnyOL125j-H'2g `%2p4(TEbYVS4gu>x,i44+ h ZF=IROЎ1O8d[;O>|TzjSݤy(_vmuje/~&ٕa!cEMfr{.Z_$UZ{[3'mm5uByLb:%GTlQ E\*~@ŨT2eqg/ۉ;-]sW$- XeIʸ۶Jƽ]wnBIbxY){L,c05ȳGM,GX5u"9KbJy! 7­bh"gaҋME6yY̳Or$ 䘟|:\jٍ&4| >h8x O^qz })a 7=hǝU|,͞$ʺ Dafi`p)Y|3^у=?e\AfYA^nD " m6f̈́wS8\ȴam(ESU}Ss 09۪qƃް!0@uM52@U63z3WaX]ǃ o@U{_ X5ݔj:n )HLչjA8G!D9[SwOX12 x0#*2%!zLЈ5;}"?&ʰ`fIJ)qI <}M˧zK,*0G#)kSuQo\*Nq i2 8-Z=AVt;D=M.MM N*srs1 ۽r3)w4U?|;[ Ti5sKeL 9@N9v֑R3|%gKZ9 SJ"}H'.EEi|^P%gӤtzڥ)鰪 |f7g[bJs_ #dE1$Spm P3vְR3#*ݳTHK'}烣e/ [6lrfAFkNpRYx?wI+[xH)Md DTaE.rƈD3uXO>pw&f$48^4xSHJH4}FwqςQ!&1&K+PZL6\)I%3(긦$ % * ]#p IJrʹA N=Js<ɵ_=gr8is':T:W7>KX˲и[$BYzDUPs4 ǰ^pXۂ_8_E?.lɾ#,u_lv}ͱ#8V>eOlLPvC$)Q+979X"BB? F?ǎd]M,?g ތf;Ƕ-wî;9Fmg*\'\.ݒ"s잪'ӻN쵭]|6Xȝ':ud ]uߣ's,MtR,JsI+Ewn,#9iɧx>U2'HB@Dx-L2:5 JimU˽UIpu%%J I g!qo 4u)2FX JrkFڮn=V{II9㷾--bs<춾eߠމb \br:qEZ=sҘ t]rߧn4+㜸\ӌ\F-Vf 'QOU^FCCCK9ҙv3/HbtX!{>^G,XybHՀ 'j.R\Kf)}qY"+!d`4k `7^MyX8ͥWsxygkv6Q3<;s/Ww.3mS;S30g{q*?UA4Dҗ+) Ze 5Pwu%rԧȥ⎦@Ѭ)(JYj(7-B֍<*% i!H1̣\0YŜ`Dl83Сn] Ȳ} Nuwd0.x쮱2N)6CߚLϹEML!<%(5Ak*+/<)%[29fAc +fG d;P:p ggO dYkd'UU?7W0O]abWsεS,,O, 5`'`¹;)mA Ӡ{aDDv_@D*'g %)YI^;/{d: }AS_Gs*oV%MV i" 4֒Xa)`:(}/z0)_wa <}_zsKz9o g G#Θ1Cm-xuO4 Jk~<+ɳƮ dw:ܷu;ƭ{_<'ce=YYY(RohwwyO{ywz@^ 8WoNُ߽oܻwޣܻ,3ҖL[F9Ĉk]2 H6r6(He&{Խf._+E4[zGQe忐ݔsWq6^kt7Y2 +7Rnp\Mrou/*?k6=7۟~tu\D垯rNb0#Ħk+oU^ӑ5hȓɾ 9¤;ܑއݳ6U7 VCՆk`o90Fھ/UT}~{XQ_qQTQkG1FqiGRIaj({#ѥ}mǐu"ŸY8^k٥X8QZ`-EJ Z8f/776S9D8u1Fu9&&[B"(&ߢew?C]_^sLRg4_ WT~#lr(e h[# ;p#c P=?9JPૹG?^M Y)[ZF:0tp5j2Nୁb]¡ 1~0%|sIƀS.@IL[pj~=No+t^ҚW}MR?;z K w髑?gdmdk!{W<~y7NxQ`a.'+ŜiDZ+R Nx Ҽ}r%K=q1UW\L"R\)M[r"(.P})Us_=Fߖ\i\_ B@媈k/&Jϟ )E j+c0+X!(bsvZqdHiZ7\YaԽ3S^źټ%>~WbyuD껀M{>ǿv~]<cw!"AT6>i?+c6ܤ?w[*XD@;nNutwW?ȓI)X++RO>0^O0]m^(cjW뻇a~}Qd3$+FW@n~Zmgڠt_(~}O?3\3TIC- gKҲp,- gٲp,- kv ׵v-]Kkڵv-ۀ>!kiZZ֮kiZZ֮kiZZ֮ki!Lc:\FPȝ=Sqg5q2Xn}:mÂa<"~4vB:8Fmspz]%E8 6.<d~*|1~}`f0q@#O#1B99s`^I||]46:4Puqη.*- "1%c(H4ҥȠM8͟%?NnF{3Kz;cdaZ{zxQOѺ!,z38B>hU9묖.&aTT!kPuU U>piXINcGMpol(Iˤbbj>j 9ZERMO!9gGR!73AU(76òICN2$%>MxOCZB!ԕOJcDquRdtKxnI 3:;-KeA(Ny))8ntNs,[+E-罊4l!ZKjpAzcL,jE#1m+yYC~ko:$[V1W8 j֋֋ǁ=# 9> b9hLiŇdخK˔;ǍTh,}*omTD(ītLy[\:q+Te܂$6~yڎ\֏LjʀtDoTT4Q rT=vLqf1Y%nBh(1U[7N2;z2>}:{-Q]_Iq!_}9KGQ]O]0 u56ʆH! 
K -A{w$Ee"pkM±/੨*T~8zW6_zh*iMӺpܥ.m#c]uYl =xυrsr2-YeV:.AEo3o^I.\\O3$R.@(̉(wmmmVy$PUlYlIîJĘ"GI?U(HIR9 Cw񵑔gg_ }f{uQ̕V\y^ks%܃:]_=%R8̕L%}NXZʜl1@$pNU.P,c}x&=?hS7(:ʖ]d ke: 4‡QAGԁBb^YgRy;r{ 4(hsҐakVE^ c@$i|1gax&k0JИlH-V'\Ӷ|۹:eI\mdZK:g=V$0Bz:3yE- BT\f Bm߇ߋWT}xq)<މST12 ʳ(Z -P|Y+6)@)h2;}c %fXr6{.0LІ85#G1j>y့Vj˳2JYy0e$7,q|7\e{˰qHho4$7+5*U8'~VNo| N޽OO^xB}׸ G`}k邜&{P~zP~@n4?ܽi&J5͛ 4g%z6whWM1zncbmo:"/6y_xjNOƼckr/F1?랟U^qcsϟB z#AϓP MmXM um^6}\"HAzG3/ETuyGq<_LmRRlv6!j}t=ATn;p+hN( {{v#Jbh!q& {\ݞN=:N./8}[g(6'ȝ tFT;ZwL/#&U]D WJ?Zk/֫?sxbG* *A8i ݱ$XAӝ"A!ڞ@:O?Cs4\}ery;¼x1V;k{w~1𦁷z_{J0ʱ@?^N )dJ7c++ej ^j*4N7Q7c%O*.!oht{cJ{\AaJ3!N^qAT_!&ۻ4+S]{7 L#wafn̘]Mv,6y=|\v}Vs^>iTe݄wVc^4"{3``SŽNYG͊ woY=<=|yYs66NF>(*8Ke`$: *g%L%ѡqs$ȗsTh@W~}ӟԝ~Pw~AIÒ;k5)*lOHo}rY;5NH&H(p,w(K< AIveDfR9)~4 2O}CrAiMb.4Q8ۿv~/ũVlTӅjNW aKsr=[8nپyp9Ei! !>-\e:u\X#:0N4$8c.{IN!$H?y(CEm(bsId=2lrN :mG$&LF%iJźhVL}Abc_P:4؍,0%sV3 P%q8kpLRdH4ʓZ"oZEŖe#KRif"62jT9JWb!Xd*Qx=$;`<,֝x {(Sj~숈rh:Dqӌw)*omUrvI W6mԠ@q *9(#ݖ0@ QZm$)ZqTO2_ J9[c Ϝ/Xw;,v>:͒]q]c;\j|"b4.2(Cs(I턣D哩^ $([iʀӄspq{,5;vCVnx \jFnm= hbk?O6c]lQM#y Ko*5\B3w\tσ%N;=R;ͩr\PIp뤍<)쵕*\[9%Oɷy w8+q{?KlLpFe ]"b@²Tje>$4+0Rg-G՟z"\CL$Ԣ ?ygLPp£tTJaGJu9i6VL2[o9ǿۦd)`t'kHnIe{Nb9l%ӈ;Vo+fQxsz%gUfUEFOv::L0":c hΘB\ќ1ϘBJ\:cgL mhG'cp\WlC?s#ͫn S#%FL ooz=Uw@vȜPN4wo?;ݒYfc: AhdpEt.Diuʴ m.ǓFYS샅}(+z;-Qwyʕs6 *Fb(6IҺ.hy,o@DiI lE4vhi,j7%$v-'41 "ӂ®Okbff2D)!UJ[HXC<2+!Z\e`\]Fc䊀^@,rEV.WD\Q0"+V6"\H=,P"J(WYHhq5gц?EO;'']Ȣ!0.yZ;ٱ\UбuՎZRh!WEϙrF$Wix4rEZ"WH /=y\pIz(W9p""B`nl4rXhErE\P$ckHXz.:tLǪ Xdh ]R$}i!vכ}St*=Yc>fOT}4 Y2riX*]a-zXLt4 r%bh `iZ H3\fhurEr"recx@D#W+d,rB(C["re,-6\FVxHU ™g6pq֘(]qY|$t.W%zg;\hMGQABx=HɔHFWXhlEəLrCoA v>vE#WH.WDXʕP DL _sJ\aQ$W}+`/h 쀱C4S<ۯEZw; ?r 7@;YZTH @b4K_~q@&FRNʘFXI\Ѻ'hv>y\]D4ek<*\qRd(WtDKLxKD3EWIuuA;J(WV;33?O]~(m>c1#%r9\$ZC+LC?\#^ 1.Y6jG+jG)@+E$䊀F׉X iQJ䪇rŭ#+<\!eϝ9IV(u>ʕV0HuWpEe(MrC;cýL MR#·a?EXngi.cФUA_;#$5(YZt;Kwx){ߤR=̰<989j'{QH\=9唍jZFhӱvuy;J-\Ps ќFWGhh]R*HUbfFƓ!Z\!.w8H+:N:U = 1-"`*Ѭ$Z\eZɕ<1\U ``Lu?xr;ZђV-J&zn%MDr@hh]Ҫ$W=+Zh\!pÕ<"ڮ;hGi]ʕHXG$W+e,rEڄ.WDaUI."WI9M-49B\ 8E9WE$y)yN&b$.<qsXOuDӨLOͭ51.0t."\͜5wAʕ<퀍FF3:?:A6+כx0ؗfx_Gm\{"F?gM#"v[|],qbק^G=Y6qb!E[̊Ւ?V%y8j] 56C3 ݤBgЏG9]<6>d?M2Ðٺc OGvVy'a>=86Č3>N5ͶEbө8PYe? ϿlÀCv\_ oygOXsw|@ma}{j;~׆ ~.df*G?WɄ.>Wla$b ]Vt_/$~]Beh ?Ft\,xn z g˿[PDb7O|\ݝ (F V3l=Z>g)lR:]}1Vf\30 2h>o~.1Ʌ驃^3|G}=ugn >׃iCh]5_̦{n3v 6I6ia8g՗1Vr iޭzVnF/׷ QLf tp)~>$*1wl~~>Ow%&o{|5nތUw.wghw?&dkDۀ{<=9$#&==?}zg%nM*lzƘ;O^7OL=dfIm K5d`וFyY8qkE&=˳y5ur]w]g?m9_tҴ);. Gm Gڙf ܔxX.*0_x+8C[ |UZƹZeTgଵE r 5?p^Bϛ8G-f1pp-ٌ͋'MؘC;dtdp.V{qu f+ ])'2*Q^*uja"yyB[`g,G=|6TlՁϳ~ص*}0VfΚB/y IVK`3ЌsK(eUY9AJ|Ur8!y. brAڛd\|؀I8)G$ctf%괈Cqk#6c.^eUk޵>m$OU4=U..6_v嚧,"rK@a; 7=`:ۜ:kdu cCR)HiaBTVA(%%cN7fM 6HOzA柎+kHYXѲ(;mrW9$I_D!eƜK9ӡlFhyB9LB*nJ#enġtC* un9S}.Gn)C&;-OeҨ6 aQ\5QbjtH2UE#罎l!ZE%n,( T!T{; 0&U[#t;pU܁kA;܁JXcm{)ǘ|Q)T7igGs=d9-,|2TUڨF 5LLy}2U7T+ Ȩvg9:N.ii]ng5ҴEj`*͘4Ϧf!#jo5TZ}B!X֏W? ;Xbt%JZb{_1x[}AepY`xLLvIBj ycRd0$REJAyBW7 3(5,_S~0eba|W-'a*f4ŧgtr56k3虙Pf(+27m%Se:T &Pz~L[܉FUfgZ'R !JG0`wK=JEZw,T/C6;n#ʆʡ>PY*P;FHc=3r6-.vgk<7 /^KYOSU}o#٫ >[T`ֹOj/+}uiTH>6Ex \2PEXI銖3΅95u~#g}N`%gJ5(3I2W YTLJLEA !e֓*}ə}i/0\>g~E#v|zYS̆VYՅˊ,}. JD0ZkTG9db"[0m2p[6',ͳ&5:+ xA1 8Ϥߐ)0Y4)HJJH8q;ƃ)BƝ?G#@9^EL駚4~&tr vЈͭVST:7XPO+g*MxvIH QĠSlA8*F2h,eocng<=y4eBDR28*rIq撎J> m|Ɠa zy°,g(9-#7&T)#%ð$-ã.ktT/IA8ojBf?P2Rfŷ'#} /{ :Ѷ٤E0`L%.J<>KRzƍ9LeVAren=pnNؚ^La4ŶhKd Z.!) 
ۥٓ7"zu[ߙ)c/GE.NOl.}e l9Yb|FF4A3ntzDx9)aM.<8'4ٔ/~\7igS%}"_]/486Jt|qAݰY_y]?n+KnD6OY66aM!bb7"W}x1b%rߣwBU~w=aEo.hZ+U~,{6eZ!~;'oX'ZOىw;S߾yR4?=0.Nї~g_W"og?xrzlLs둮{7ǿ]|u0/?_L듥]>!L|~1,rY W% 8(rڡM{&̚/%^n/cޅ8gzީD7Qwc%߭mB 2Jj??ùBVC[=bM?##"L81%~gQhk.5fFo: ystk{zځ{&)*6\Q{QchLNx+u %q>tWkn4w7l.ټӭfUP7~vonr 6 ChfBF!X+t ǣȥ3YjnI9>{s<3t⊯+hC9h㭃xYow`#Af`c6XI3WI=L TK\pJSz^O:M//~+Muw7lKgt;NM(Kdv4ԡ^xX+2J]RkDB}m!>\%acBs3vt*Md&Wd؊T-']}Iג\ q@AT sȓ 5+ctPT9o|0m)Ǭh:UxMJ.%2GD*ނ 8њ "Gp;+U^u oglr vSQa5XȖVw'6"$2k)l-IІ&KI Ie[LIz %%hO1 Cǂ95*Մ4))Pwqu(Vjg|j:Rw;N.g[;ēf'D7ؚ+5t=gлKrtyBx#%3$D_P tqTҙDL Ley6g@1f*?ĘpJA+,\PTmzY4nf7ysGaTn86<{—吽ѓY( T2T#er26xm"ˢ"g &h鵌h2v m&iu.܃ tEfQ\21;CAmQ`,ȵ -HRIEA|+Y )J2Od@붼OV82d!f H2ǠIq$G2M̃3r6acϣ, 0sW#".Wm="xDeZdz ]%iKG-ug ݖi#6NJ YH HI)@@&R ؜]{h=b=?$_gg\+.Bkpe_S&(g8CY/=0]R"+$>84ɀ7 :b;vC1nxې5Kjȍg gaMAp}E?*0=V6ލReFk)Cu1!C4\ m2Fc6AX} ~!աLe:<9KPk2w(s6DxBR$ &W&[:`nKV 2b$o<6"Os:Qҕu "'#x ӎ&Y+ei! ӧuU>tuk =wa t*_w3Tsz]vՂu^[y]yf*l1` $o)'@j>f /%, y':*_xr0 $ߔIzZ~߼AR;o\0*)ECf(+wiJ,9k,LW4؋Iekс&(:'נ#OBۡ7mcgNe:x-/uBA,`l]<١ۯ`v̛I8|g3"}gz93= `_NL7%Zčdpy2h|ނJ4/%s;%:Qek2 "T2_9.\¢wɱܡS#zT5x@OEip',{=?񠝁֔e(j*-ۑ;n7t>9iP:ݺh{!Kh91[ (r0)\)*).9c||?pڿg )<.I;7a|_dd^/s +JeD8jf=xo!<BѾysvN[&꨸q!eXd|xMR@y9j5"L re s4hz!T1qe -ozѲulgli3,6쾓+A4$շi dMe ;-%WLˍ+KZtܪZ8ʆREq/h;--!eLJjJr38i̛i)/:Xl/ZiMǓE\x:nt(4Hft0 H'3ZE`H"#AHZ"FAcPĸif º sEf J)P$A!dGSl1b kMhhKg'aj鱖m)aD'Gs  Ȃ44'3d RK3(o}LIkw2t[FXI4u6@zE :%<݆)Mي5_xp!d?n[8aIh)1OpV}ҕ@rQOnIRt\ns24;i&?AǹՑQp- Kɮ:uoWU j9$Ybe|..-npm#YgCW/k?DZ7&Xm<\"V7gol4'#V+ޅ!7<^,(%I;׽ɕ٥9=[˿8wgW/$f z-̛z&\lqp^ܣeHoƆø m*ϵ]t۽B^yy3:V׏z>ɱQG檉,QOz=>@}v|Wma〞xy~/Љ__vXy߾:pC{Ͽ~i__q8Nو"A{ _/~Кm&#'|1R1ըbjwyp;`eB66kf/6{/B~#}ÆsțS<^{MdlS9{WsN5YK}cW\UMoDYTΈ\$tTCtNhldWOv|wL_BvڎS![mݔ}xj6+ r0OFǹ%~A>aE[3@{8@M_ŢYZXlY^:^B[SSl@4 Dn?M<0i]zw I & R{8T"5cr {sd/aR t ʱSKdܶ#uu]>*:0뺘};:>.<>{rl{l}3қ^7jS]M\z}VF@6%"u+SGTLӕ??9D9{˩͝aՙyjQ梩%R9+Tr>)בjtΖ~Q7&UR9E#*'\B\>\3bk 97hc,<[C R|tV4nYI D )&RTKb0koF/˝Ll sWg=MciЬ)gkl}5$dd[4}.Et6)p{F󴯇{ZJ# WgJFݣbYخBXHӿI_?#w ]XBRvW y3 9=5'{yqp4h<lEyih=:\I`c#՝eDžI $XzZpeWvWzrACpꝁ.WWV+~pեd3\OW=ufW z:\u)7[pV]+,; ĵ! \AZW2hWWnw=ؗWiV<_y>{qqcaP1%30Z]6~GY.E1`3:=N?|՞+rP;n6$4s.ͣ6q^Ki>x; pPs*ť ]ZO7t)7|ANv p9ڝ+ =uRj.=Ω 5jgKOKipUՕ:Ă7•xYCmUr7GG}@dNĽs@C\տ[f^ƣhO(E^;y\$}ɝģfd~=;XxLmERfb^6^{#{A_~2\x,IW۬Wp uZuCg⿡;N 0x{U'F\Hy\X;|@2%#/͗)o!lM쩿B}9ŷJ֛NdS`kTl6S4?y?aZW9*I9"!+"yɇg'=g`ꟋE)+fRbtl \_"Fc{KrA|j#,!lEib=9Ul9nwbhI xlJҞҺZcPOBdR،@̕4QkdZ*b01H.i;*-֛ U7Ɉ.&Kƺ&H9.I$Wf8H%Ule[M"+0flcIkcR֭Fę>bTgk&(3w_ZFLAMHZ)JnҜqe2 IhQSJ [QL>劍U5ZEvs_+_[W,@ #T{x{={{s{q{X GW fe"آyLX|/@1x7 ܂^rbPW M%b*Ċ3$c< l3„k|d S ;E-’ LM 7DЀ(el͠j`o|ͩ+1eߊ"0)5 4,ZR/vm,d2%ys qkJ1mdYLC00gW!L.")(ؙb]Dƅ|փ5x&Yl{R AEgX$JQ.`p^F PǓ,p@:? rB@광! 9GKVcޓ. PSX%x0KէNWrՆ 5PAe^+m[B)aی꼵 Jlj Lo AULKUT'fcDeթܩ̩F$}LNby>zwzVOMgz YA3dX=,߉iy*}m@-X6 ^#x:p8(ҧuXO%7іңwS$Vn\XqxRaB'腵Ap)#ȄO+ ϫ3 3Z0ӅqaaŁ~S$H.:,_S4ƃݚ76x n*^NXDunR& :Uϳ5@Waw*kVD%gF\V3CIt@Dž&}]|>;?|[N~ ^{@#-\LA@Ɓ^L @lG\qb 7lG`n4i_͠}2A5B=I-? ˝ rFpK&k\K,yNH?,BU?c7n\Xx9)H̔8t(AA~Ѓ\@ #vGxc]բayh?) >QSQ!|LPtb\!ȡR+Y<Lk~B:b9k4Q9Dl4؉75Ah(RGoEU^ "i‚Z4ܭE@ ?oo{ .LCRj DƅucOr1ΧDz|xL{zxk̵=2b`@@a4&ZpLAmt]w{GQQ$puZ*5fݟFHGjсaKDl2F2(Ϗ"Vl|7$iRZԡg8`A/k H!xsC<"koǕ`GUuNKdm`,(ӗhSFX ӜtȊ]OWU &zQ]Yv.wh  )RkrMwiOsHyha>iwػO=84oWJG7`2veɝ!\!]-)6[jVCa '\95n2`ݡyOFrU49قNgACSEjl51h w@CWnm?4:]F]4ou?3]dn:5uզhDzg6aנҪ]/.o'A`Uk:sH֞fڤ@:P| mMpäe~ُ~ZD\t\OkSvXs|%SsOGg ZzB"GL\RZ`~s f&? }J>.e^a\2#F}95uCwD,  ꦡIM)@'mtݥ5xaB$#9` |\.BWZzډ7:'TtLEw c4%;ݕJw/_nqR+Bh*+b&E:zӮ=l呻Z? 
E0 Uhœ%%_$B"ڧY D ]@GIE&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@ME%"`b\/& Ĵ=2뎋LuP$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MihI 4d7II $z#& D1"4 4$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@NGbI $P`$O+I)&B@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I %֣5>/ϹԔ/ﶋ rSw?kίT0K .%g9%/&D>\"ʨI>izx$銁]1nvRtE{bJ@ui昭 ]btŸIL,h)TW_Mp hG0ܣN&7Q\]EchGX :׃]1mL)UWr+1f/EW`(]WLjBNFuЭK\qփO/o: goϿ]6?<ƹ_Y[0ĥM㏛kA M3A׋4&,]D US4@Nz/g^k.9@p.Dmnzْ5[SGv~WwuSԞ.ߜ/Ng7_kv%_UӉSjq%ct\ԦR U5⪡[9p`/Xk+]7of?O. KץO= <^~ݷFoVsi=VMJnoڧ6Am+U~1}ѹJ]CݽohK]W &j}Z/n)b^ 9Tyٮ*~r61>z~@4a,gpIRf4L_'gJԅ)hM ]u FW Ak&SV:3)J*"lUqRtŴX'ux#itAq+MOrk=@u5]el4Sθi/]WD ƪ]>- k<{7R0ܱѦ&(-,*CA"ptŸ.Jӎa1&+;A>gq3Hb)+@ p=Jӎv5ҚҒ$ M Ѽr@wU>pVi}8[ 8m.vI>r{G1h>ۻ4m6BUut6BN!'Y%E1E1 qCOu0qΚ HW |CpuRtŴc5n%m銀3b\+?"`UWSU$=b$gpizk0:Rp9IU `P7yZQ4E]@(HW knI_jgJrtwl0!g^{p;;7cG ő&(CaEyMo#$Ablp)bZ)-TWѕ HWF_]1m*^WDYܛUWG'Fz#FWLSS]EWhxw &`dqEo0`1 :N1W*hHrj#٠e'Ap8,f&Ǹ6X)K\grGt pA)b1W]MQW!銁MnNδ`JSOQW@3iz1AMt]1N'+~T@)",h*hz꺿dp9,$㢘+ ŏ2:՗+gvkzg<;G\x.q4hэ:]WU]=y f/EWD KQb L:}u㎼@ڑBHjBjmb4'.\Dgr>uyPKuK3)fJ4S&TKO>w/wY&%o'5Czۻ֛ gQqYZkK2%&/HW :1kL3eaOUWUtm ]prr֐׽tŴɔ+TWUr g9ET);bu]@#itE]n1bڜKQ&/GWvǦ ~?F;,iaaXvRЕU]=- ]Y3z \]3t]Y~1&+w}u ZgqS+ux]%qttى1Fdi+䴵GqgN|LNθY:93і>#Jti39%I8]m t]eu0ţ%h_]prAA6uŔ٩&tEv:ₓ+t]1e *ylTmLBlRp^5>j~c[]/|Сb Ȕt:!+aC L+uXTWSU1'I 1C1"\tVֻ_eVu5A]dt gtENdiǪDYUUWGUJI"` FW^h=(]MRW9%%M 8Aы mů]e.mE&Ǐ Σ őhHF [csA}!ݻ9wgu5}H◮pXϚg4^K7A\"wj>??t~':cv)>W%'CQ;qK$Eȧ/ Jt?u5{]]ov͏O]s5w g%yk9$鼣k %w۝z1K}/G}o9PW?o_UPٟ|]_vYŻ+ٝπp%֘MaU{?vt_z1{q֕z*krvkRosVSeAѦJрب Qb+X2uQtE䌮wax]3~sIN[Y33xb܉K :7vJv_VKhU঎zoު'.q!=FwߠVC< (f&Ǹ?cʠ OS:Ab`/GWiS2k umGy_]p@9)bڧ. _WLt|Jf4d$f1s?bJ *;`銀btŸ.I[20.:?~I }5 F;V0\8@Wzl[018A`ok]1)&+G~tIq]mt]eq9+'qC+ŏN+6bwFM;hOYc  h w,ohv; 萼u4n `4r kq>n`JD7Lpl5Јz'EWLMbʭj: H$HWB+ƍbVeQg^SU$]p(FW]1m.(ul銀qQiC+L)*{oARE 2n2RtŴ"t[E *du;xj>=?|۵ˋKxN_z3٧>_oz_ƮZgaY!.MU[ M ylj}nȿO)<f_/3af._I˚(U6GLsX tK9jee1zG9__vS==_?z^= \%<<{Yo<ۜ$yۙf ߜNl3ny/}vRC;cwK}&S(ӂOtBY I9U$M9D zPr &9dDz9ee9%I_D3c9d(Ad r*,N(,UDR!)6% LnBьc"ryel`'n0p9imK,`b[ώcY%0dN:-*1lNIƲsY2s!d SR$FEA= XɴGg6񠮇ǟπ X@SnygZ5ᧅa]qcmv/'?~_wXQz?$LbawYY~z8|ky11S~z_ "/e/LYWi|!% \0+eE UwkJAA\ a?/oV}P]-_pN6KԲ{AZضIuS&ј| t Ji.:>6'G`9  F GkC<3gg}s|jhN'vP}|ݻ[6e޿2Wˆk:p6L^:v?PkwB^X  :搁,IPbp.ifHQl5&!ojێ~g৽6Dk0)ܙmk\ûAju|`Xl@꽿ƺ7|O!6LɭwMC?ozyZ5s;ۭ$z\)>•iXnjs1{ѐL:l0p\owG1Z~EOѠW`WV>dЛ;ev0W37tsZoqjm6g%R{"-umiU @*̀$B@"B@ 9fRgXӞ>Ԧ#MDal[ Od*)622` a$Ed&D Lx@'p!yL,+L&*r ˬT6fhD?uFijECPN|B6[UY=wt&vξ@}18CmSbrbM'h315gf̬3ƫ1+pgy)ĩ;2EvBIL52B 8'eSG-R0Z z3gK@*o Tf#Xt:CB%B^"KO4rFWqfgƙS|BrʟF[jJcWnoM؅2{6 U͸gT `ubxl)rWxuuOUd]SfF7WTY(+d6"d7`~}QROB$+ۜ\GoWD ap!1 t^ G*slS B>R"Rr& | T&p1z팜=W:nU٧l?Od6Z59 _h ;F1 HT5`KֈVHRPl.A:T8NU$ET ]A-"@BB C k%@>1Ǝ3rv *)rT#8B2Oύ E֭,| [|]aNs459;&URTq'gM[QRˇ͌'oP{kF*GnHL5Q,t 7Պs&qY9Eem {:ՌښRYEgm8%fRv'$päA Ϥ֌ǦΰJg38Tڳ.tᜢҶ]AƗI, Em7}ǃfsO`0|?`gH e!'R Q< 'dU&k,Ѹ|[ÅYhDΞO@$66Qˠe2%ɷce.JsٰziNWaX%Tvgq*Z۞nx*t@*@$ kUʀ DV(9fdAA;ՇYiMC&d(H2OȢmIGQ"9h3rv֩/G2=VOE#vjhDwֈgCNҊu>B_LR tf@Z9(1vZY`LK-虐HH&Hty2><3rvkHCH/6TN)[x芡^jbF,\@4pr2A`Q d zzd]qǡw0}x6?d|Âf7 "9e:=W?>GEl‡j+| 2%XyM !c2\F+Ah2%g+BY煐YN};ꄶћ2g2VEHNH@Xd2ydFh<)HC %7Re|אgNH3pD61%ObOF֝BQ ?ܟЪG,~IlȨ 0ٯUc/=n>{lD&$$EF#pQJٓߧ= Vʑv1nÆ`dYl4YTN㓖; as&H%[kz9~0X”i}C;B7|TE׽AmwXW}qo9O>vV>\|=MQ $VGT缕6k̔IYՕ׿:ɬ *Mx% WM ρ#X^AG`RALGa+~``a-bS;)/cntS㨺 uLF"eW8s |0L &%Ee >^Ipv>q:~g lE\R'o7}Y׎t{3d+d JekƘ(K t}=@ȞB{,`IqBEC`X%4(e2>BQZics2ϵ9AIи \nGWuʬ\̈Y^RK҉i =$/ՔM&;w-Lߝta&,uYȍQ"Gi,dcAz*:kJS$*`==V'GkO4}ja+ٽq9RVGT-`OClkS4AkҶH13? 
tZnF/I6X#2Xr,C%H̐;YC!inC\)BV6yCQ[M*I4tֺ$"iDJǖD%4||[«ͭ_W2c9pQȌSsr=[?nׅyuzF.}ַJ}ogߦZjqzRĎR2yY{zQӄ%w͛6N\'F0KҦӽ%F_/;?7ݍ/q\Lou\OVE.GznF;w 5I7$n6U7Wc67hY^&~,X:Yٿ鏷5ΊrY#7պ4˼֦q&#bE( sKpuDžmj≿O ד۽ؿ&v?g^?k}z`d@bI |T_\vU57ZXGՂwiQd]6yC3z[g& !^~NXSv#3eea5'y2LM+D6dIWi-U~Hpn{&*WexS[ ߗ^GhQ>7k}"B`%S)y#0O" q.rsU:Zس]oǒW>oI16%_`)1H|߷z^%r(L4{~ΆZ|=EډSpPDX sJ,؍ Ӌ,P-]B9`ab2ޞ4r3 4&v;Y,v'Oa>pga)gZՆ%l4T-6¼xw.XMY R-v./a~7Go@ ̾s݃)iu闯 CQT/؀cgmI 9Zr&J~p^. I|r[cR;ĬW逰 OjSOA҇mA+;0T.sNGo7!bt@!KVE٧.펟@PxS_e0Ʌg^}n~z+ܳegEҧ7:"Sa yf\ڠJlsڭURQ},\ N4tJ(`DP e6:[0sYzp-v,Qij?- P ghc@d@4#Nz1u1J)eż>u)q ?ީ78G= I@;X !003y'8jg9 z!=!ArC7 !EV0iԀayk% ;-b;Xlɞ%<`hf; nyo뻩8WPw έ.c  !L`) jSÕЂ2MR NG+c* Zӌ7=+EO+y\; M>20Ł=Q*xPzhD <3) J upwl$OD#K7 2b$RlLhVϣ"U& ϛOd`ybn48$ehQR:v30lH_jB'}G}wOݑ :A42ăQB6ՂΔe""5܌!Fi9n.9-sx%Zs?3)Hb iULʬ%ƒ Ʃ^f׋eRi͆m|澻}/qB8Bfl^x TR(G;DU.m&qCRsb^/\>^jgpRMuF k}^k"[+^H"d?)uxjR^?x9E +nߙ}ӗ￿r?~wr sf~?3wq?'xDj ׫SbpvH~=ԛ;ytY;e`<7.쇻Oo 6ki'I@܏S%W)}XaG6.~x ު4ɧZ\ջo.(|*? ϕQRC3+`5Tu=wq9JN Ȅ;_M~K5LśPAk(ٲ{fц>n ;+ib$ӫqeVLPjTF1ocG2Qdw͗5_T;/,xiQ3*HD4i}?~fs4Ϲ=[_SDZ)R9 jkL ʙ EmIpqŏN+戡s[@6Хrڃkvjpn;h4 %$Žegf(V5" t:?)ƎaK*vǵiP/zT "ƑrOe]m 3HcӜjgA!p#Q  u{U,0#`5.HVkDKq?9yqJ=/`M9MO9IUS}zUxEۖ~3\ьya*:p]e?%>8?na6-K&?n?*t͎L o)"9_? Ջҷe >\H poVMkۖr'{Sa] PLu\^*~]lYeuuWA٠ όQ)|/ ~~!DòmAEsFw*՘}v1,yx!;97O`k\mP&q8(( mFPiH$g#gz "ks>r ;E6`L8  l^,RGFΫjʯa&x؏tFd͢گ[cHƜ*l~7Q奼!s-(4q_[q݀ce<`PC:<NK(AZӨraoFtLk.PHc7f4j|1tPFXp4rОV'PCh*zOz>?_% 2|.mVboU#z4=zxGXg0Nm.K7 u^rÜ9QHnɋ ,kKש~~G~ ({m:.h p n[ &N#K# Yp/9&r6ȹMV Ȣ`;X+\$@#RHDc9R(:!x2zpx|lU> rw@0Հ{|o> PPB #C$ҹNsSUMl:Q=J*9SɚڑT6d8l jP1p #18!\JGZ(:Qd\uQ x'R*3x,ڶ>j]6C'~|SU@yUP5ۜbLiS{.vR" sVXA.%T+My0DYE+iịUhRC 62( F刊@p*:dK'~W%{L->1)VQƮh-#Y*,p/N_#LΘATRN60Oo=( dÆM<߼mU`k@6$ځFIE0kILjP5II2(*+,vS < l>,AG];w7 FoxOm"zVNFt"גViKL© \,K +VPcyٺ ہo7lR[2O8sfkU&K~t[0ۖ8h-,C#yC4Fʿ%l1 4Fv&D1R 8#qc$F\%r8qUV\=CqQVfTWU6oTK9.fB,GZaodlXUZ<&ϡz KD12PO>nJ1Jq[ =^J`W@fkLx1~/Ls5FHC0&>?`v~`?|;0:}K@paabse6 [u?Ia'* \~ +^͙"<`2;˜k2B/ Ƙcx8ƽV,ElW+vznW[JorisN^cg-@rFϵ,wJÃ&=DXvy/^?~ T%`+$T΂#"SܷAj-q1 *xy9G ݑ艠RN&Ј5;$y0A&S)m`qۺD8/z.--C3&=/ žEC`Z8qһs.qQYJZC۱ Jm>뫭.]{oG*)u6y N B?%PIAU )%jZTgPRJᅧ\hCw(, U`D%, :#V!r=0Ϗ2&IYX G _ЇR z7$)FMޟdc|PWlqL]_&8O?6rͨR!9͂ -dmRP- u. =dzzY ==Ր Md!qNAԋ@%6 Krhy OV׭?|DӾdxQEbDX,Y܇6*::2eG@ MGbRX~^W-VG !+CYJQin'p=q}" K%@< p#FF,c8NDtpf'đb:׎sG2x3Ŕ;1aa|dido"S㚘JU^B AmjYmضYN7b%s̱G*g|: x[‚m&l~8byp5'4Cϫq!╤j?1?$ɟMqGH'6MgȮ:ViWYOxߞ6ÕuĮY\Q ubYEvVĵ_ l?1[Qo=l)pmFU>,|\abڵp)@cő^Ӌ:W ulE<(҇3R>cw̆c{hcj~zQGϫ~f&؏OWw7=W=~Ǭ]܍7œi^깷rO7|^}>;KVޞO/?jT/S^ 9iȢNުiq>h244mtp5[_$+y* (O>篐ƻ\Y+roK~ev|g.ޓz…3l&ke|+h k? ۄ{b^M8?ki=yfU(}?~S/g)^x֩oIcj#>i-7,aĪ%59 Qq׻s_/[x_h&[7-M@ȓ #xc%J5~BXNMёJะ[e8I"9ЧADr1K5g9YY~ԍNgMFϪ+(7OUP/yz… -?O 8!)joC{מ6&t{&3;:If@*AkX5v蚳B)uT˜@6X^JNԣa@2(Ͱ5UcCVGs~)P:t2(n8C3q ҁD HG4-#w($iA:"b/8LP-`MTUJU&zN.+K/OXWCo*). !>$-LfG8ƭ哉 AF 0$!XʞZs$ZĀ"D*$h.!u1HnTid,FXNW)8cS,=>),Smz5nfiqnSRC[tK Wnn?z Vn6>wۮdruro67?x6 ?eRRIz $X 0%Kwғ_T2UEBojxSlfu]ν#?%y@~US:3Znd.WHFlXq2[1.-m$, PGfFXg6g^ZC >I8;8mߘZè.] mo< #  3,#$Hz @@$T{V0Ta3rlDv3rȡ mD3O0m7SA F(^2,BT>%U:BJ]٬xD:τB)1$0p8sx1& pysC QgYfyAm5)A48b sbv`]\7EG9翪WP.[ WPs7O>6p&?Ū,8T%d*w1< ~X? (4GqYh4Ch흓c%l]3wzWOtfkIU9P(o'W=1'7\ Q7hڔDy8c+YxYἸW3m/fߺ5(tp?=;_4dGΉ4ǴZaUӮ8h.M/p8'K(WյwZWmZïλˋIwbvCsbQ:c.8GxŴ:FiS'.nlU7tu7j -2ǘ4 >yN/g]_9<Nz9VJݽ^rU+ޕ {S=d$w,}O|~t.s9NU}؃jzoϛ5w=T"O>l8joW~x~7߼{zw#G߽o8qQ[A $ @=]wMyYL5f[wF|m١/1vX}{Qx3}PkvM|[m <ьEFyAeu;9Ɣ1˖rk!Bm/lAޠ{mI3!H2V{v#)!OlsP+=Z.~OȞNU8*@n~ӝ?'QH@Rq:6Z@uK Qv5JF+$SDտ"_>6@|9l)c&R. l&91X9ť;Ok`V)g?J= y⯙.g͟/޶ų;[H8Cm<O.DhCJw7h$>\%oFR;K0*͂39NQ.$aHYrYH]#g}Gzv=\]xI."hL{Ih{muv|XU|PwmC:k/5~,UMYݭڴdT{T{_y.ڃ!UBث92W!7VI=ngn>h-#֐YЮ0~ 4N` Xhtk >-&zقv&=h3oV>,X4*Kz,pU@Q k! 
+EeZx]ІHBb.*NwƄ|^}2?{Ʊd Ob=Vw7ql/khS$͡l+FV _DJcitW%X@*;8+)#r [ﮨ,NhXY \g9-`su#!KL˘0Kwk-Ei;cq9Q?SoC~y sZD)Z0laY;HtH D Q* =4cCIq􀉜FU8!o}9&r6v*֑aYqG\$ %RHDcjnjcD{>/JvF 8goy$B*ik.`h/Ѝϛʀc~9Gk\td(Di,1hH*;tPZz䡴{Z o;P^GPcO!o@O.Q X;d(2ϥ#e:>R&zXD44B~a;fu#r0WӋN0ާ;)`%NJĢa 4T#<FiM3th5Z`(3*h2:ڀmT2KIwx`QELM+x-zx2Mc[ɮgMwMQeH%C&5L痟t8w~0cSQDJUt:y(Ug9vq$av.j !緍E%f !D %P%11bxSk 3X}ĚG:A~5Ӽd%F&E%4Ñ -f12a_^O` MtkaќL~tr]kIC J_o?UrE[ȋW5ﯼ.2-xoŧ6;vZE+G\bl鵹'w؇.&ZкzԺ[osXntf EnͪChY@ˢwixoh&#cZnf7wvn]h-7ޒ 7zC|jt&(vAlE-|oႏpWyȨ`! N*EǸ.'qicC"U0sH8+C4W-?} ڷ'>n|yxM,GGK! 3jg>1M^1Tk%i;r/gB:G7z^[M~+` Ķ_zu6ů'PTx \>wpub'jOƺ=IݞԾ's8=V[9IZL׋Y%Χ&ʖfؖv;HPS!酤JOT3qaX,`W0 8k -qo6|aX48LH#+3WNi:4%'IJ,e Dl,iC{Yu$´mAGaͳCJ5F 83EX W`,:S׃pf~q̽;(K̽dɏ3!jʙRY$LRϥМ(+kPpԻʋ3u4D{ Mq,z̝vX-"ro0C*չ KcfMj:ze3?.,.,= °˛|>EԞ#Y9r:dTUj>^x-Mg-gm-`.HZI[@]t^JYy:/ut^@>";:3`)CNvbY'bcArhWSXFšh$6^ N[BxA)ı` paN%v9cI}w+ϝxKeĆ(0C:):#) "FZ#.,%.v'nCcZ)♯9̅њXf K'C^T¦;++RRoK(N4dW o*-HR>J혌Ƥ Do]@%몣;5ȼd`c K$Qc9E.1ɩ\f0C@Ε#gܓSDm%gP|$w XZy,=0W!}*aIZF<,I)XG ÄF 9Žw?'QaZ|8V0#*(`FB 6;˜oTDuI{2_e)se=ٟѥ-m+|d70|P*E9=!ft $b&i0)2Lg>PiOJ]d&$.'RJRΏՕk'1zc%\!.~l* -n\b.ϖy.o-Os{;{fݭ~Na.^e 4C,!rlYU;ZA>GdT`LP>A3+O/?? ~MAFBB,ZXJL!0"~ ίyx(_ *xsV;GL{(A=7)kD wHvÒE*"ا36:1IJI7Dd%0l\" :q2: |b_Zj'jw,:2vrQ w^ƗSWOucxD* r {1:aENHcQ!o8Jy*l_n%ƬSE罂bWN(q@qO7Vd|sI\}29V`v쾹$%oĎ^Dz&sc̱b:b ?SH)EВL[bxЄtǝ[n*O7,Ãte]@ր e kJ; p(hRVY羒 ?+T`D@(Pf㺃Nϴ"<k~'}bcpS*Lja"a!@cQ2l 'S!PcRe$aR/I*xd!R&R/5eDDL ` Xy$RD3dg5OF߮MK~%1g%tnsBS sE;Q}mGԟ?܉Rw =Q;ˑT01ӨRdeF8FHsƝb#(K 4Q\J#b$i`K ](EP5sΞ^؁|}tkLZlo(17_0!x@B%R2PA*+e`D˥(`WB!U@u'MS=b$Rذ Th`AV}UKfWuZ]G6r[v}]@hBfK9xZM]L[ӧLiP!8M1 `fxEFLY,"RbߑFN]E={Z I[FWh -E 2UgUUn`R}BR0Bp`̮[|^.8S!WôzÈ/›+;^WKqvޛ*FC*h ?K_MRXӖ{YxBwt}9POU54}aR R)X T*A5\I1m\w!4RS{D*KpX|[XӬ%P,)61жx9ϷT: Kqiۛk7^M:{~e?6(/߾?a|5 C#Ѭ?w,>u2h07/Vٟ*_]rVvTS#-]wp!ֽGa_/z嫥N1i "zo.뷯+V6i'^X_mA:orn ^wdM 71kܧb#,|SO3u5۝-tEy;VC[\v {kĔkȹVC}NP[lqdנ.} oݦ_X?o/,Bc.n1fZo -}9ܓ&xh{"aeVۧ<oKgx>^DU8 _4s4_Xuq繓Q*HBt}}?˥ԳϢ,Cw7e(-r.*%Ӛ o x YP-dqŗs4gB1G2<sxNdmcϜ '!w¹a$ZI,X W)-8#5C1tJ/K3 ӫ=_~Wgy,G;:Wӥne8](*Õ:b+la(.rKsZVb6N5ʄG9gH6'oVpgI@*RNHtȱ%]3Tcɒ? KT*F{Z!2̸DJHHBY> f2(:PZ30鵌TMVHKDfMS7cc~$Le0Xa1d*`,%uA":i(,*jXf:}Ka1$5b+ j]{oG*YnGCs`7Ae ZY"J_א4ՔI3UuvyS HBB CK7AeAFzARYrFj|-JQݶdt {X̎n~'ē$kvkڑZ=v?]U+?* @ -GR (Kf&!y^b`8Ϣ0ri?R\ApMGeFε TmXRMVmu; w\˸9ٞ&._ar>^M8?zO3RC )$pRA$, <\rֺrC ƖNfUL#{! &# ڔƂQѨdO6cg<*o2`fpMɣ5v5rk8s_v5Z`5IQDv\ |/8qF;nw~|Gc؍^;43KO>9E@HHV(pHFI 5>ygK@H<[ V+@ȭhA $JPoN:!@c-Ȑjmdɺ%f(Qe>=EIp>~:xle&$ +2Q$ )1<}&G6Z{ں=$i{ʇNY=e-._P=o3aWcܛƟ^ U@*QƐy( BqKRR>{ G4 }Xzw@Ƌle;*]Q[/qvH7AšX}6]]RmƀL ʔB'':D˔ 9bJ)A|&y{0|SU<MfEFEm*5GDFH1BFhDjtvN!@04W؊Vp{TU:v<m:$]'Ʉ` -*,Q,E`BI8Vc&Nhky@B$S2:*\RI(EyNQg3,yӹhFFOhhë`^l=m(r;2 7$,F W,ܨs*k1b4:R-86yڒ-ȃʹu7x.,kl(LEe&ZfP*ܒD` A@#u se(bSHg =[X7a!9qyRiTop> \!mC1^H'\|?=˵ o&-bqqZ11s&k=~KOisʲ/_'^Y[=%fxW9iŬTǹ5_#(C6+jpu55?AO>~w_߿o~|?r>Ϸ߿{C;Οh&)uz4 ?<nAͧ4|jSKQg |yy[0ۇŖ z^fꏡ(^rв5 U Gf g[\N9`6DK_R%_\V8@0;%޶TByMGH|I/#% I%HM֓1Oʢ{rdӚFzjg\5='-Pdb~T&&,Dˇ Pʆ-1B.QÙN3pT'+w8fB۞}6׫wt y:W@P),fdRș54L{$W`8 d ăBXF8=<>DPj"j37IˢCC%nHvB qcFbO:̿*_ɵm7{κd) K2Z9#<9M#_:e(p|B]l<4ZNrAkȊMDhVNsq(hrSr%n3ѣs~ey0ў̈́O " VH+ RˊsKGVy AρgdGJП zJL,T-!Xw-r6W2WH{9=l&=nO\fZ@OYg~ tEf]`=u$ nQrRg2K: &Q:(F"-^ߣ?D1SDM"d&Q)Mzi]-X< nhDY(CmuKKR>`bYEgMV(A0{dA/ae2׮]&]-O&іnWZ͘8r# kL}  zx<>dhi_:rK YeK}sR !7%' J Mlش|M;Պ( l+Z[P (1Imd`3j ˥#1PZ`wFvSYVn?CSk$R2]̈ˮwJom7&\MΦl瑫ڠ;%8-`-S9(&*DSjDJ!Rc.:0P7<9AϹ>R^Cpm:%vܦ^G7z;:sL?\6DM-W.OG>,&$Wmr6Ƣnk _n(*~~"c&7[Y%XEQOd̥b%߹Ԛ|PKb'4:aD"#?nnڄhQ%?%?fQBY&]svQ1SȲ!S=b)w55 Yu+݆bάϾ/[8LFm1A!6mc4upGc[7p77O#,Z !K:1+IItĖ2Xz2'G6/u =ݾy>zxZx͠gիHO6d3͇]F^>~Y Fj~(wWh ϓB,k֧?f'ba^`4 ,t̓z ')Z8:~0{VԍljɄ&z?{׶׭\xl4@Uu^rN\J!~Lp=-˩ʿga͛6V%3؍kac5FQ5 o/ݽ׳4Nθǝle'rMgoWu?*%^'ܥ={s/mߠ ?סz) h X. 
2u\yǜu|zuv!~sF*}Cd~%瓏dav̗`Wև}0[//K>FsURZkd ,?0M'zШmGn%丘qХ>Ue-z*v{+k(tD}:|:=>ޒWA$Q9RƨhՆwG\67%h#Hp]^O3.E.x)fe t۠klnAËۈ{9lxbyh5yVk}LZpk6nFLL ARV{i7|Y=Lh(H,J12>\YM[8<돳#4&ݧs#;K޶ˀb/$ qУ`:qП0Y!o (V.نS聯vڔ$J:mF2:m ;A9@(б0@ϋ` o~~~~f(,_h-1tmTN{ՎsI:'$H/}ڢ3( J D_NQ؊'9x1r%gaJ2eҀ"poD>J,(2z#{;CZxOKL.uP{B !e0LV%攰ւ` Gne5۵ip-׳.<_ [fViaH0fVnJ̇:\N#B+mi3+$T?.VAU5cJA*8; e Y䚷m4Bs!)۷!Ӛ.pjAE8d)$Ze3y0J>jn!z9/^]]BdY!/W "z Ǚw=ڙE/HJGk8n8Hc9$"CHX.ia9mwu1Y|WOgFr6|65}󼰵ESvred1SJdk*$P!b]6ǑjϽ)mbW#T',nh;X<9ggqȁ286 ʮşŊajU+PU {J*{WmwS˼?ا8sK09?_b^Nc- Q)0۹D&cLUYƔb^s^kÅ.\ 2< 6 ȍDo`n0w+llSĻnMiުxuH>5!a G,JY-rA@91Ql,Pj+*ʉ@-}B;g=;ƛ䥓*kyҨbTc[ yR k#& $7 \ׅ¡IP6d~9խ<_iZ tg8ߵ|{d>Z6/qny<]\[֏3ǙqV\=BmIrrueO H4׳/4=K/#RTb6~S 1&J(1I"uІyɂ(Z::1O`8Xgƶ,LZsFҢgEŃZ\h75fɋk8 s[?x|;6fad4AњrkJ5޳䣠zM% E!Q&jhlbzpi2 Hi@&( .yaW#&8yߢU# UI&36Bo^#_FAR߰He6i S"w狜ŝ4i u̹:kv8o1fZ|wG|q]Bfc/ӫ{XLߝ7#F%:94'B.hZmOw!㴮heemc={hiI^ZO)L[E+T潎 Ehc% sH*>f?!q.и]]\Su+ IŜuҘἿl2]P01ns~6.9R0s2͘R@g襖Q޵Fv"K@b4'{v1u@uFFRZ$ %5D6ϥuen*d+/-AkF0,٢o~{+;'j,9ߟzv~ò7\|L!ݩf):[nަ6'ɗfvX;N]ZHƱߦW !I'ṱ׵RN4e q\ mglݨZN-4JK( !Lz{ ^;-B򱤋Q3@t;ZSΤŬ91A`r_@g2Di@MO#kUܛ2maEt稿$?o3)(Y F&Y 1ԸhɄҒpz-CſL[rsA)rFLcB:&*c$WΥPjiHKߗJ;%vyf4[_H41!x6}ʜ 2_/gH!B`ܳ.lNm0œQ井KI)ZBRyF >$EveҁH>-a_Y} cW2 7=!t/t_RQ;Fy Lp3kY<-RО6UYPYbNOud׼Ȯ ~HR+ON`Aq(LA1CTh=A/As(GwM0*q"VV,KԨXB X$p杶p 鍑q̏51hFBPqx.Z dɡy< @9*})[ZOLWJͶԜ+A25([HW&?'ɸ'g-,ՙۯ%Ӻ&v;QflI0gGYb{P.{&8W{8>:T&>bԩH8.j.O]f<  X"+%Ha5Ac]HYLPddӨgcBtkCMӭoXGa69Jt.xMuCpFYTJyZ"qaL\t[i")V WAl y>Y t2[R=EE[G| W7Ud䱈"0!)J#e亵pBe1IAXkrbO,6y1e*uAod˘U+GH/V dI9PPb &Zg:C *H%{- 4¥[|ff4vR7>oPk4*Y(:qMEŤr>@ HB\I,,i4+.p0:>L?dɺu;CM);A湒,jF@]JKt@"?Yk֥ kK`Գo$b|JJZz 볉B:zAD2XQr*.>Q޲*~Et0h'*ry03jX!vm(5_/vot6 n{gtlLµbn??X}G7?Y?_dV&Ws]FPWZTۃ}u(,"$-w: lt1jPVf0us1{-2P!xѧRWthӪ*NC񫳻]8>W[HB^.\j͇̅4yW߃ IÁFy^#g=Yrm~[/~b1>{ b˂Ћ+J%누'syB'Kv鳻tnӌWk Wج=cvX< M~ ˯*xS~sg>a~?FNغL_+w)~MGŞ,䅛hMäw([+%FwpAtY ݚ-n}X 7&_my7QxV JL6[IbfAK[M`JEJu{j52¿ӟ:P _\‹q2(5xYK :W;]m]/)sw)'y2?|f̾mmEooOdJNzc/*y,MkL[ #5~w[@(zSx i߬6fr/91ӠA:7Pë`ɟ|O?\_ȍܣ Cf5qI ϥU՝~&*8!\rRSq"Գ qiRa蜱\kITF9aZ.\*σ"@ױ )3%sX3Moޓ֓ܽgBo/ RNV .% "ڣsyX#9c#94naIsgJmI+I:֡|ztVV]==wwc(=2oٸVsNVE*c0x`S>f"8{8j=(8BM k1( JX!lt)ak!j*͖^KDB).A4K3#ZH}ZT=,+|:@^}5tA5ψ˰ R é0[t9=pb`dHCzJmOnO]#_d:Zmr@g~}j7Utl˯WWXEƫ⻊)r W<:|u£~i=߾1%fG38SDK50gH/`KS=(:-k4SYiݹBVb›FD.0Q[+V0|$Y)|rQs#JbðڭN5UiE>ߊoM88oWӎRt0&,W b^:8$0ABPx- 3DiO67(Ҫʡ I6I &6Ʉ BNYT󥽣 M^ʨnF| 3[)sSZ4e1;t2K"D 3<2"Eh@3UA%SkF|ԠgԜ^հfiJhHfQ*aPִAwpt5\ rJl)zxm=1T)IP9s]OF)ƎleW Z:5~:Qâ:^:p$m8T|N{(Ԩsi;n58فۨ8C{rqyZ]_&Mt-QħVҪ"boA͆ۙ;.ܽL  ?{~u kd!Pv6{Q;vӡE(Q4^;Tt~ކS_mg [W0[R5\ V{e@<?_y#uzN>?(+DQ0&e)lO_WZЃGzx $-HƯZGR&rِ͟ ;XR?INt4I*f߻y 9Y%Zn~}~ǙHR񫉟f?&~Mk/pկ1.2-#)dJPZx xTΊU!B\KNǸig`f)f[ZX2 ?bS#weHkxegkt/i!$ɤ-[l'R]v:>l q0"`6C1(yc>lq%7K^>YrS[i[!~ ȍր{%  4X8/-(B9ignx<@4xx?ZZh3/5>!MSj_?΀>ܟ~ë#Eҗ>xOE#H>\ȁ+ z>| UnGW!<_D;!/#\ި .lg"/Vur[_kyST,.vz.o''LJ喹IE^I.;ZQ,w7:4/eeA K€:pIժ0e*t|yGz&ԈEo4;\&n$xVQ^L-ɻ_%@2TA-^g*V__;F%b ,b&$Ab[ph ȔЙp>4-J(~ks@)p@W9.wg+2:ԍ|nPV @Ƚ@ #P!l܊YVNa:ܝWzDWFp.JWh\Q'8Fc}&/(Az _oBz)iӡ) &(cw(3KW R敹3nS˯kJ,+%+;[I(@%6 k{۟^h+6\@'5cjP1Ǝ d>w[:Br-#wst5JZ|E$id 4E1+ qJ3ze#d:^VCVJe_^ZSBm<4d'uyNIpE i0'_H_\òg\/.R i]p7dn@-gj2+zޟ&3haK /&}>r2AjZdɇq) Ff{™)DCɊ"WA~L3^s0dd no.$/Zi$#I5zuMbLUj]7x$wDxo)t +X'WhW0FJ_XOr1~UXneSz8bU3],;hpa"9wwƀKLB^T@*/X&QFd.0e }iJZ.'"i\4Y]4ϽN6C A3<\J8B# @wwpi\Hp ,B ғJvt^&!o6KeKJZȳNAR@%wog$#e i&gڗ i쀷oO:?(X%Pŷ+:DN֨?Z 3.~ӯ_ #XRwC W Wç.U7wwK{!'ֶ iL/Jf5S~j+y/[aK;mߡׇYVQE}iruhLrj .rpW\Qًm>&4^U96k~Flm&@&i#eZp"rd\!Dw{6.Z@ڐ& *KF 4wѓK\@Y O$2|R>Z~JVL1 nXn0*zqQ&X+pQXW F!-cdunX{LCVH16M:H~q'p?b(Hy^}fpV ż0lg_]QbSFX3zC%ѲpAcG`k>s蠸p(3$ںsP C݁qZp جuxj]q4￯tJ(&7+{"r ^3 xp 4'ޜU mZIkwRHr+$`5d8ήC)]*mH*Mj0Ȕf>1mL`MkI F61"we.4R<ơ`Ttj !2 ɷ7Th`48EN@3׼ [im=h1P[BWߢj~C1 3它2G|"K)t%* [c(=෠>֐[v<<1Ds9nL>>xd)SB 
'?|>r)0'N:C2M,a/1PTI[b+Yj(a캙ΎgP<;a0År2NB!f; 7ʒ;IGs2L)Ȝ)  G*BUTdXHWUS}TCV }}VMvN1FKKeVwK Q5deҐGOn )†(Tu 4$*UY01Ǝqcc"FQ<=T @64I`+fU_G0Ǝ*S[1~5\k_} N'RS$˟_l 8OuyߤChɀ]8X87 GK"pn{U8Z~q(?D/lW8:uHY"rn"Y*o.ArkhL%mRPAMfdf33Qi6=fs(Uh+)TM MCE.JУEs~GPh Ȋv+)܌Y<1%NQbyrkIQK )-O\] -n=4-ঐQr*2ܭt(k|(1"*R3_hÝ1ƎqFtItМnW4VebR,tK͝Wg+ yfS|e%pqt9y;a22M(goSMPY@jҡ*]FhE هPR3rky|OPibĂvDԔr 篣c<<;b Fh TWHeJZ<anW nͥ0byc%`TCހJ@pϓAN襵j L,ЅXt9E6cwa8^[nV!I& |iٷ} u}MXpZoFI[*kv?bT){8js9jƍ0@ME*0Q+Y1k 1vmuA![TK7Mwxnz^mPF@BjĸܹH`xCco9N-˚A=Xw~k1O`. Tӽo/Ss* wrnq ~esZomȝ8^'FXJ˵m<4$Ae.x6~˥?NBfLөnZCF5M3re7G-hY}Gދ1*Mn/:I1 !KH75U@+V;2 "Oq]u8\lPX&$ w$#&Y. /ue5l\'7Q~  n"ɓPS6(C1.)1MaQJ RIb6Vqj׵<\koQggu;|~I/EG2}%,[:wӮ2Y̡k<;cU# VMqkW^W_v]"V%b[[кU }BB򯊨DHD3%!dndq,N^KAnusݣUnᫎ+<W7*|ϏHFL~:4϶fV_??7m͠ ׏?eJka*敤a^E\3<W1*T3 hT|5UFh_Sx5AۖO_Y\lY.Y@f]UW_63/ìގtތ9:[,gZ9h va25@49|?~~-vɃfVOc7Ziڞ2LH&=SSn|ޮ 9H^خ,Z|ueJieM4+91MvG"$G;1F Bg)C\u1R& }n"4k%/"-U8xϡa g*$Y/jeMZZ1x] !G}L逻EJ$>Z5*O@zp" t|E|*r޴7"MyUd[č`.ʪU2몪渕\(+oR547,+kjÍEB ൔeX1/Ap[<OJ9\&>QJ"K60q$LkV^ Z}Kr|dGI%E8%`<%18a2x<*%AIt˺ ՚܁H%r>z9zw[eIP=YZ &fꞭ^Qr Q<^ՐxU- R2 aeZp5ւN%ҵ#*4z:"﵍~㯿ށ@l{n4$'Lrn^vlM\0Jӵ^뻝ҭgiC+mm^ojn[š|$LVj}eO~sM!J'F4]o6Zv^H`D={?[͍c9o x9C|_?mtO~}ͽ/u~ci/?kmQvtO+x+ó lF>kGۻW8:\`K.Uޟ---IErhyǶQ䯆"f/(ʟ/3%l9\4N{gqߧ(:!G;\q}[L3 ym hhen݆fp|I{v;E;ã>sRL> i;;|lL@̤̓6ó{iO|ʧ~g/@FpTmgCxڋ^줸*/G'p]j8qrQ- 2c?}]BO#~vJ&p$՛1KؕXq@g'e1?P-Bm;G$uW ?R2+'{?+ . " ;O/$x-Wj6Њ1Jᚵ `VoV Ẕ7CfuL(dYooVvmz~[*X!YC`Ĩ\!gꊖ} kffIQ^q-SK8r&YN OrV}B݆qlxӘ4'Th뚱+<%."":0ý0,6 Yc< yu+Pl1 tmdKt!?k@btpK`:U/84KO`.O@LyvOIp.p@@&!5C0po? 1~ϠP(F`{_쾈G6|ny0$c8T?1+\ k$+晓WG[o/`;c/sc7X;=;<L1 A3v;6N2^~7+L9Kwķ<^UC'_zgsVhIy~|ЍQy:X2_ 4Qt4SIEcAI)3{YaTv&GEсl45xaZ/뜥s??Ƭg'Tp`_TlYi؊۔4#D "kg4:)ahrf[."0% y-Lg.Q6TϟxuؚBD*?Bl4Rw3Y3C#|1qP) rRxfZx{2w0{TdyrA`,R$2B[m%Aw6>(f9kmyKDT;Zgp ޒ:̥/F|Ihdoeși`Y,X"+@޷B+d+M{* uԂ ɕHWɧa\oؠJVd\WR2 >dEQt{I.IJ` 1Sh"xФy s`HlCU(Ejih?-* %-ʍ[>i VxpQ:Pp~:yC31ӛKNx70MSM)V(<|jf|GBa+ _Ԝ&- .%}lJ ULE*c)je%rJ!C55?~V@] )ie"J1+y^E%%A[8L7{w~?ƅW9=?[KNנ/HZ 28n鲟.lg<8F.:Ч+.Uq睧J9lRɅ$C y\I%V;l,UAj(o[$* (lq$h Dn<̂: `0"QW3OdF*I/3YZhgEei$a2]Ԡr[O͂Y}^)DcVKpd3'.b=-uZ08%*5 )qj6Q^SBzC0A ΓO.+R=K`$rãf$'' ijјw'yʐFT 0bt@69pc/h٫玺4HQQ-jxgkQ׾h;ӦqRLBM4CrD,&^-O)Pk|+p~~Kb`V]nCS!T1(Y3UemgNKGN#f(/DBEkN dEDT˂ycEt(ȅ[Q ;f5@-B'lxe՗*C{<KLg:΄ ,6NNI ")5\ Ζe$ڹn0t1ṿj l g#}nAH†Z.v7?୥Ϯ3H%3YB[kRivZ~& R8!|*a#7,A"b.k؁n 2.i};eEVL(L'<ʡ ^!;cx& x1<ـ0؎CBLj B?5`n(D%\*\b Ydm9*oE%h&qiaA*5#^Fİ[W༗FN\쎸)2PJŗrD@SB$mEAs:a)xP*#ʴ<*Y>m2Fbww>/bwodcw?PK RkA^RXq7(]*',2(q!  
JUݦ4|=ʨJ-a=쭵N|nb 4l y3qdɅŘnnQq# 3ﱅ 9\.(v#9m%䜲Jg% #|09ؒF_xYL&2P10o]4hM&t2Ǥc$vag\:p .1cF[/*2y .PD1IˣA9.-.w3z('%<7(G6ء,ylܗY!& 7ו[*~PWjKWZJt6d=s"쒌.>nnQ_e\Y:S&RE>!N3Yp^)q;`f*FTZHW"&.yUquwB5!ڹσ󧕜:8B\ |B!{Z/(UWٚ{4; pRH17(Is ~d;L7JԓLޫn)Ln|pͥ(9 dQc< ӓ dwŠiN%Ɉ@9[^Ot]ܩ&-Bsdy`5ImG6$[1]&ȼs*$d DJBqR玻EC5zBRLaθFyRDjw͔q!N{(\~(ˀ f \KG,;l5wZ^OriHhT*17ލJ*,Jɝ Q)a [q2`JG@pP-8suxT:h.=dވ@agڌ#7KǍ P*-p%)іNI B|l`~=xₖa&NYAD Rə˔ڌ޼X 2SflziǾf{F?4޹HCʸ\Op8.tq1 fT}.Mo6_@E>& 'I~l@m.&NZgFp,7SX8d% ;HӘ#2 fS}8exj`֭vࡘ* k#5:{Ҷ;B%Z%cQݵ#bLAI2dlks l7+Q3լy˝ Ӱj ;R=mN[q!(jNFzy:FsiC󰽐4bqfF nn@NK֧8(Qc^d(E*^]7UنBhLo^븍^-fV)ѧAmRdyR+-%DjadLRPP631Vv҉l$eýq%%+LՓcwJŎ 2=]ZZggx@^y 1g]؃_X6j6׼l[/`=X`Lu,jE]?)gX) }Fڡ t);^'C&QNnEUPG5>Blu[G` Ttc:|ʊ!{vcm,޾kWgAY +ضyOx@HC6n3\in,WDD]t>b}-$!>]uU9z1[Խu?}@}$j<)@qgjWο/-ga٪}""H;>N%2lͮ 7Zp*qXYJG#Mc~2!j Q7}~{vu eO=IߏɐP6ҽ7Oi^gNX ?;<]ǪBv~޲=nH˳NiFhiM):6!Z)] JxZ5R\MZ||KzEk g1A/B^hm4Xy>Y, OHZnL~Aq} ܳ+Q"(:Djh硇 47g&P`|̍ĊuSu#Bl콄Tf=z{9Jݩח\Хw#=}yc6N.d;k& iaN5Yc LײkWMBhT32)X:8qn౮^"j4~^B%mkw5n @ KCˎGwdZ6$,ꌲ:Ea"jMnanlL]-I*DR-Bu&I&X'c˝.z4k1ð܁겞O56QBwFmPtK:Zt]vLR.n%P6 BB{IHk9SgBY9 <7V|=aRx͡2\סQO4^'5jp^JqCxcYVzzER?ƘZ5Yhc塚^+$B,CPϞϡض !v̢"'@,(sCl]b!nqwd:d5uZ9 )252X-Ԡdՠ|J0y̸?dr0][u}}ݾV[+naf?5, xBN^+5 ')?q8=8M<.mz޼+L>![ 'c:UnjblUo-V<[_.D~nYMv.j?_G'/$c?wsɿ^.e<]x;^i}.\ꬫW6~d'y v8zutˀM<IJDӯz w!%ۚ*vmU`(|5]wQ{u]fߴWčdeaoqq,7/^~0E|/?Tw5@ jOzHHOغ=Arhs'f>L./;3vQLxqXOnFhl=}I-t($APR܂ԩZG}po}uv3Ы6]!__Mڋew5MY hަWvR1hd RuK߲[ҷ얾tK7{Db|gyТ.6Y`&51uQhJצKe&R-| ,2][s㶒+yH^6&z6;μqS`^Qh^˯BkIj()32> 鮑s֡[r$ErKZDX"L/63! !)P2 AH|(7Cpqng=,*1yM 3cH$ _͂jZ3(| }]5Y^ }<.}6HK::xƒ mE\N$#5Ĕh^DҜLJ}Dd'诖_vi_F>GG̵"IseL$`!*tURx6ϾZ0 R:|C^/Ɋ1Nx9 ӳGa $0ɡQ/N43B7bYr0}nƴ^`<3!M4V\hY Cܭ ˗6 Vi·@Q)kHn /U[9gSͿQ`d{=.Z"Skm$(5/ ֪B@l5S[-4k )X8a\{qG:B"Đk柖}D xZjK(')wzz8ug*T{7(Qk=@""<9h%'ݚ-&Iuo~[SJN<+N?Hl0\~fHCDt9[&j68r^"dO~@` 䑵l <$Ipyl~ӕǏp{ )z (e|pEC" O"_^?]~^|VԸ']D\%cߑD&j-1tRfc\OuUx šOT동"*K9nş\pv{=_ww]?1h9A'( *udzS7a,p=_йnyz3smG-UT͡tv0瀵=þcAk=ܙB  WHe. ^ `&x2g7ywNi dw^y%,u@K)>]r\N{cioL9))Ӗ/9Y3*p"H Cs4s SbciDB y.2Ŵj{AӁ]rtWbB%MYߥߦڦKS莺ATե)xGqn$h 4Sڠ,8IcL0Q"@pũƐQd: У .O V4RBc{t v9ΫYҞ+%1߫hpt]lM&Kw35FbJk9t_=q y-Q%gL3= !b*$PG^qPޞ`em&r*J+7+-}<*`ϻۼ1ۼ)6[Si ׾L_'')EHBcbs c P*NA,@Me1,<'іX>gEJGcsgYܝu a\x[.9fC@Uc C44{V.<НAYpӕPЧ>ܥmNF㜊5=-SVuU*Iԏj0g#^1Ac1E/lce~FL0EXB֛*k%zc.uԆ"K\Hq8RP%ZJM${nZ֦i ekӞ'܍p7fݘ wSpڴ",QI!%$(T&,Huc HNPr'ĺ6@jӚ Uwֱi uÄ"U@fAU  ޺(^^Ejo=y$&uk7 N\?ɶvK~w7YR쨆': @G hPCXE}vBHY_M@$Z9a|9*NDiWhΎ40K)@vZ?}`ǑB&A$dδR\c9{͊sOnV.:@@Y/F3KӬcX/W>ez;- {l H}NWcXPޏzjV99 A]'׳"2haydE].CgM&N=}P\`[>iz'wAgX/#􃶯l~x:UL\;tU#B;gqw֗#qtQ+Z"'ZP.=l*PB:D=J m'WqL5ސs~URTFm:HWDH^ 觐ǒ<ą䬾36m14%YVh0%4yݘ9vc؍c79V A¡PI'U2ws_rǶ O 79樠QcP8U,%h6veݏzQ RR'Y;wvɦVo [BP6 S첊e:BTKql;*Y?;4Hu[%1WEF*f(HcĭRiV݁_ 㒻/3?X Qx(.%W20vODg.``Z4 dýgFj=T1Avu̖V橃]Ip!!tܽ1(wk?!6r*'Xb2!v\!((ŤߜB$.W [̖iosM6=lyi5\pnV?['ra,lgH2){:]Nb}Ywu4jg?׫j^r V*^)DهcR1I,ӔC'! %R9ljL\%"W~,h?VŇOz~>m͘~e&7ţ~Woҿ#fg_#az ~Χ~oÇɷoL"ۿ- 0|f_|j>8B02OsPFmV ףmcY+pCn'Ia3ԍ- YRJڔ\H.7\c1]wG_ ƽ0F*ֲA,kvEvҬuv;x5=l*k638;i=]0D`g0CGbW,޼IWlTI?]4u2:S^0>Xe KU;'$h PMEz˧^bpZʉioziy׿A_ތz,~ DuxZ3.v:pBPyA`0XdS:_u<==|?h DR^â8 G~y\mt ΆP.?`21 ; ˉ3zWL!0{[)RMp>ɸǬ]=_Ni#SQ:^'NOq6dVLF}Ƹd hnruJmm0_}pKoO%^䳡'}[ՊgƗ}eƟz!M)~@> E<5_M>lUmhQHa%L>83.9ِ翏|i}VrCBɃ^d+%gs5=i8LLJIO0ɰ{pURt1 > @Ս)b8!pUDP( kb2Hg EҠQlyX]# $Ӿ ԗѲ,8_Zy 57 iSBG4GL1pȹCѨIB((ml(K|nZXl`8̵||fƿ0p4 LKu<\I_R|.p ?02Q HEG [N b7O^y p&8/+[xy =@aE..A]Ӌq6nZ Qx2ޓ[+ a?;ū'|O~1@[ !s.Tyq: iO\0s|2>)(ob>kZ<ՔNaSVYB:ma䣲ԏ| mGkPW$~R\+A9u@ PμԚ skÕߞ|{rPЦ-EBYvE e WA09TiSqCJ?=%S⍔Dvߖ2zuJ%^C&=Y<*΄eOKBpD2Ϙ͘E", /+D:}gMcN'N9u\:l@k7B!>oN&glOR$uݐ]%8) a2I̢c #aPL[_=c{5W[G LWvا$DŽSWծRW)E.m,RM)Wf9. ." 
%̝yV g8cPI@PS w}jg{NWHَ %?VaeD BPƐљeeghD0XW\:lUlU(J~AܞUU$`}B83E%l*xZm5Uw?WEZ^u]{QjO)FbuQu߻{}뮻bjYcuwLP38|BL9k21H'}o%ߒo-w:gKӫ=$>|$.fs@P~fzG5j/mHWh{~nu5w1RfLs\nikw@*w[|e s#??nbWy8vul]V#;@P%P밲+ֹj AX itq˪:>*glnͭq5Yk(Dk-!ysaTI|7 L1zwZϟi).OnhIe)V=!HR`LH G!)%HR]4{ 'Iӻ6R[_·f\'9*[(eHޝDoOjK,}r}3^T(,] lO=&wGSxP!22fJz;s!#pcpv,iwau1M[Zy&EYC^o?}nK۠Edu|Jl, u8&jp/Eiϣ4 *V,o44[ Z"'GO =)bFwƧ afx"(d 0OwNF1#N`)5QoB< %p" J(@|j 24` рv֯?2.-bk+u>zW,xl1PQX I$o!9H?loO!EN,VDȘ֥Pk S*\BD ة cj()m}hF ̛҈lѼӞe+gYX[_/'5#MgPo4fTNM룗a?i^VVgԲn$'Xi2ό$KRH %T2 W+0|ۊ&R <5^+LIQ=CSǸXWx[c6ef0K0K0K0: G3[ue&èojNxyh!'z5D6`&vTG)D"eJG1l$A\̠j,=,x+ugri \ifQIF3\ j8Mb{QC^`t4 2BT>:1Mkl(-s=h`P-C(g"K͝j mj,XoA 31qifle(OY3EHvf#7 c^fmZ5>$3R$LX˦Q&bg~3zІLeiϏ^}iת]fA)6ѿӚɉ̆h^1 )@w6 ㆰEaen2QW(k6QvcC+M<) |`R|y;in[n~؞1`~\+kq0}vA9b˝ujhyIjg3B9gC|#:UL4:CȝIِ dc Xxah2ϼˆ9ap"X`ӈ] oC\ݰUecSqbkea1pXW 1d l]r!\[dp FC9%8V`xf*b&b#.V\ M,O{8%hixՊq,*H4g—CQٚƶ#ARVDgjuqu )pB˚51yB:PhNjYsP `Gs"JI,ڱADanB^Å(wq/͹\҇2/kNch&Xw\ ؀!7VbmԥسH+a>LiG=8oȥ}HFFY $PPqhK!8"bs s2Dň$&}36E9#K N=ۭa{4 *w5;lmU0.!LjFBEᨲ 0r0S0@ Z]+rgqYFEɠRec! Җhp NQy5cBu0j\]޾XirذR97+O-c04X2QSʣey!&kXb[MB]"t1ӭox4x(()KY*VJuL@[̝˓D濌;NKy:QM ޽>yyey#r|Vu``HJk{gi!eDDH6&H~~ j4LEne=E܍ݭ`pGe[f|k9aVdY=-3wϳ {?=hJN`)Y!'CߕfIyom\ˣ8k1*~{?;Xy ƀ6oek<=?P=]-`Sye<3mw4m%ğUms8icghA+1ҩ"TZrr@kQő\é1 '1jd! $Ӫq'}zp+HVeQ 'gyY}O9sog~Jv엍Kީ-0_wͳ!NdRIۮ~#LWی0gtX]C.>m $+3zlv,'ލPCdڗJ_J*,CʞFf#͵2L4I g={ iŕٷު5{Z\KO_D/a<{QϖpRaպB<=`wڇsMϟقʛ->yOQq}=ȣT׃luߗ]+ Cȉ)i vjGГdO)})*1'mb*}߯t0qS\#&2aM }/ L%ZàEt4sfQG/޴{|Z?~N0mcL ͧUj#R>JiyZ}eob;2*xoo۬bo2*\EWpU+X]&wh#!KMɥ ZG%@t0!РV%ՅN\C>n5m=Lngލ>gsFp4Zn6Ǣhq7E'_Zz.8BHSŔJib1Z W#6(. V y @+춧pj:^!/ T7,{KT y:hi+8 _4-{@H=D sB1xL*,99 \?i $mPh8䂢Zmo~΄=~cGEδc! +.p),.U@Ms 4&9I`= lt cɦIQcqi opLlyb{_?xYa^}Rh6( ZFΘ@NXIN(K h E(PN_OoO4@%AV7T{$:q PMbY8TѡYMt+j ͨ^$շAOui'!Y$f]x.@s"0[\5NRvi.]UN}(rc*@-EJ}|P\݅P;rJ+" <?Ii#uv8s&u_[]hK XoiQ`Q X0 P&&+xmE#T\9h]g^4ӲrMmH=o6@9(1gD#:nz_(/4Xͽ-$P}Cb.$Vt.':%%#g3c90qNbh 2?×6CVJ%ç_rq8pkJEN:%3x .i(.cZ|Zkצ.Hn1ugXQs  3oYpԔխwYM~F"2􏇋@&r4˹Ɖo-׺BԪِkݧ2iPM㛖z?fb1s,ho^"LL}519?{qsCk ~xnU_/#^|T=z/t׺N+H1Mxųj '. ed%QR8b`{H v@J 3HJ)F!BMΨS%K"ڿHL"Wnl蓏o5x[[ RL=x#,San4ջua!D)PB: x2e|8-㠠.PUh:USwу,|bX?hPvkEa!w|YI~ur٭upuULc9Tu#Ƽ9FwߪhYȓ( =Wjfb:![Lw~Pȕ+Za*㽍~sEYF~Ĝc1rhЇTz6|~}py.roe$2֝_<uuܤqvr/z|fR]m|$ { _ç> ^AxS\H (B. *Wg ZZ#_uTxR.@`jx[`ڀ# 7~LS>J *z$t۰h댸' `bȉl+~2_^!!bgW]8}M g*R?mHƫEF_Zv.GjS>԰KYZA]ކ^j ˧SGNdT]"'^:v\_`=KZ@y)7vI9hv6y[c|GkN=Tay)v6wK"侁 ;: ^QBpQ8n߶5<쇋Q&&1Hqڅɉc^2M +(( Xc!RP+/URpI<=75òXnSؽ׻FJK*zʤ}s %y`M<jBjoSj7fC:'Bu xЦ3gny^VFe:/3X9r7/k+9\=p "N#&Ovz\'ih=_Qg M\C/a #{ -I% >1SՒtQ-~: mPJ%("-|ˠ-^ ub1TQ.lF[,mM8”·%N|jI[c%EF.)zYIR6$r;IS4̃͊\[\^A,L¤P^,N|}f^[I-$ yQ\[^[ު6qq\pKx%5Z%}X \sP^/bA8hIZekw'۫5WqFg{}N dـƵƟxM`S$GTi74fttK6&lY4j\5sb%{,[g s01r"#z>_&Һ~^6z.j2?ޝ9j[o[7{Q;nCӁ9ㅦ;%:ekn^7g?ɥxlߖ1}ik@Jh\BD1KyD#8p!)=|w돣= 626hc;)+cЙ|ܒ֋Lp d4H Mhb?~8;}yG N9:Gڝ؃ ݞ75]N[WtxAK A$ O/ۃ88&p]Wo^_χ .,Qp)K%W.߀Y֒pr-͝}_ז']m~;JGmk814H[7ớ m?ONH68;ŌJ2O&ˋ4ar+kYgGP\ ؤb`JȕJl Xh6$)CЕ}b } x6,̆i;gZwf辿qf>-$wVP|?ЂQ_@xm~/,JZn >h.nB_ZqD-m tcC,Yӽ"Ӷ | x(xێMxvL.' .2gd 'zJvi7/=`Eˠ`ŷpU `Io={%puo~0@O޶7NUoϓ_a~!^z^~rsn4Wgr zL?tz_C$IgOILkP9F&;hHI&xw݇vVqwHoT}ib3<|ۃA=y|pL_}7md߿( Nsp;K&^+x&mө(`ߣr*Y[:k^$[βAhM" /E [dd>f,Us*Jpݩ(l70vsc4nhBFrlp\1S/&sH.9?|~8X3voP]kDR%MXt F8XL+}LʝG;gƳK! i0NB~wb?"N%vL@O/5@w<=r^X/r^X/E˙I1R0* q >4 q, d &iw>zB[U٘ʄ(;PUa3?K3[|?pA6MDiQѹk;z B`k (Ҥny5 áeHZr9 ZF9)#v gäR$|Wk)p5ŵbuękˋq<2:}ji"GdU9>hE&CM ㄠ؈1Q`;84έe֛9vf\%ɴXԄă0& Eȕ )D? 
09v ;SpV;+R×RϽ I aB:Hv'i9R*r!}$h󣨄4QzRX qbE"TPYD+ME\]Ee}1 0Lձq8PwWFULv#&< Tń= ;=C%+:KrP)Um)juiѲŢ  9זeɚAT*:m[Rx1en۪vk6±_|B1>Ә{T/r>uDdl?0}ߎܰ^q7j:b1TKQ"dbwl=1ʀ.K1עuW1x(AҴAnż 4Fj|f tkl9 p]'?6Yɣ,&'&`fW4?;Gsԇ>gy!KQ]8ZF+4Q= ۬fE)igA`U|t9̟@^jiH|09#Bc\ 4 bh>(,̪N!Yv0APvNcxWA:{'rKo'zŹ}1~ 앩x5#'Z=>hw2hN%˿6i?;^Z'8ZO=ps49V yS~ S[uFn.+.q}}&?*`b*(bIzqGe,cm^{]y"f>?MrGxA+c6>4X!.K3HV6$/<])%cl39Rn|r>nBȝѶHU$w ?xiC-4kӛ0o?P7,h6:7IX98lnG.M2h)!36NÙ#չ6LL\Jk4cI*%NidnoJٸ@$.[, b0ydI<$T avn*0nTq x|f, F#/"-j=v@ "]] [xh^*K#vj'4ierk睦^[I>[;V:Ke"|Etb?u g{yiY$YNU6AȤS"&fpȳ4C)c& Պ"KQsjEv^jE+/;j#N!5)!: ܃vFŶ~K1/&đBH Hߠ`"LlQDA S`q ʄ!@Pj .}/ l { ?4ľB040d," haTFc|h0W>-IA} FU~iRÌ ՄLLҪ6YA7|[S*-bsc߬ucpؑQA\rrt q]jpV%Qsk ѓV fc+?>sܿ3?}ۅYB Ĥ82)%ctN9X(EK;KJ(9s/OޞW#rN}|0H5:ߧ=^\M mZ 4_os ΀ ΀3 _o> ׁ3j'cXc @JPiB%#BB?؏Lze*Uofb7RWa})hZ޺ ^@ޙk˕{yj^誁gfD*0l9^*xbr>](t 1i5[HlK4  |Ufu}*FDc LL"hk""1-|ǾaKHT;0'4EQ[d۩}fe͝dڟ{j&.S ,HTn5k dڥSK,eDL("a`m2Iuar%{9F5.Pd.Hq aa鋎I/wcYF^;ɮM|puQ65 9ƞ}yVA!bJ)ַXAXRZ 6okr<5G9H7֤ݱ%?lox20k*p6w& I{ +S9 JĀ0N()p+0BbY(&_!Ehd0ոDIm6~Ɣ 3+0ZHPQa&U֚-EAEaJ#jŲ d&o?0H+! qM (Ala4"9S>xL#,1L3R(&*UAץY»9HHk0*f>QØc77Z|)H$"RR拽ְvD[ۦ( 6af(Q6dP4[*( Ƃ"FbX n%E*#Mg^oMNлS #=x0 2Yzen!h'MX%tUEGyZ :JD ׻Bd#8ǐ&%0a*.g5aP=YT2v5 T͒1v@ Ȅr.31sT"t勔xIcK)$7րT j5rB\ u^'%- #n]}עr_b6RQ[БdaQH[2VvbEƽb ( $dCuz55\\* 1H@{GiBD3EךTiŦ_Zb /ן '|k֕=u9=޵QM@6a>QIT¶d倗[5W~2B)|(%]23|>g7)"Bj\Ctj>jNd[ih-RR2Ue0 NvH br*j"R2WqOq3l"O+7?|O@@fH-c$2mNg/E".xh¥Oa#gz8_!E(bA;1M1XJhI)ٞAjbjլf+&l$dW}Թչxp10;:V)=j҅fomy_bV Hǔ2#i3t 1rp14=łOfvt*쵂:7'" *J"1wc3테v~)GŘ}oo vh}(6wk.0ْ+]u=ɶ(\an,@ 0LI/By4 `3' \11`YdaB 4 5I98m%/g~pQȫ8c6B!%#-FmEP0F\B,('ej}PR3^$v.]r(tCre.ks&B[ܥuNH qdA2x0EE?z-Ҟ4jy9F/7nL#3(#`Jr))3AuYa1sRAo  .(2 (qZİBqAXs 60>1sT\aydyڧKj(d!\:)IX2$rN%^S 6"XD} WQ.u>˭b/KmAH6S8+G=3UZ&ABu&]`Nހ.AꑦD; 'r0Z#H! 9-S^(8s(:%^.t @iE5CO& MAp8ˁ[.s\+0p$I2v9u[㒈&Jn<`v;~Bg@&=ι^b.`)0H3|%B58Kp~m(@leV`jE-cDTcVz±KqBg\tJ:zRԲMl6 j(LAE0hcy±Kcڀm~`h<~=22&_˘@|YM ֣ ^xNx^3 &)F"0. 0PEs_UŊ9Ǫ9b(m +19z<(;5 %Gڎ6MT%MJp!R8 S\s tZ!T!轇}Hxf13:W fn'Z?!VN$X~l;H&e%G2R"VkBmu`#Mpa۰%aL_1ׇΐۊRԧwzLٺq ?Uj<#}]z>v5sJ# |&l@c% :Z8ΐ5DWܢ g{BDrV`j X[ʂFяǞr.߂ P|oo˭ƇK[^Kcd7(XW95mcY@K[]Fsr& m\WFHNz&$/IJ4?%{'f2Z*l@QDu2b/s&k>.Sr[ޱ)QiqZ7% #l5+n juBG|tKV9-yNUeA5++;8{Y+v"l.z5D:Xlq >GH{wF߀9)/ti |b =l0(8u-`ep7| ՛/Q\˦$cȐkٝ/%ѶO:\X5 Λɞv NhH(fsOJŏ{m8N$G+&|(>QpWycp>T,҇',r`ݢKb[_;Znlxh/&"")$-b@fn}_ys(vu_ߗ>.8&eC.6߃$Ҝ5::[uu Tz:ps^]m7jcťCRr6G$v[݈z„ HݝvL}4;3,*]P]Ji/{9BIolʾ7`Qy l+Յ,!8- !qMy)UkgRY) !0ՈDaeT]aAJ0 NM2\U^W=nJ5&KC_"XnDqu?|?>R/{yE<ċx{7*ŃoUM6.>Lpⱱ2KdsMzKDt3e_tDoy2c*;*SidLR ) EMZ?% i-wI^ҐN؈ԍ6NrvމY StРUۑ ѧo8J^HLG^ To OdRc`S`Q2Z2Ihod۞jک]XgtM0X0y;%$UٹCqkPu\ؠI١qHrWJ` /N(e9~gɎ]2wV{ c,&tWe,&|j,&=cРzsfrIB{; Ft4Sz픏WN{<h_#ͫOWwtNB{; 9GqMjAԿ)$Ԥu?XgN;{pSHp>$Zfع;HM7yiCu2/<@?pݟ@Ο߁=RpO%{7I?E5B\#ރݣ4Of[2zud:T;4T)$uG !B{,3|-$.c&QƩ圦RY7e]|VF)cg<_qJjt<73_;Ɠ˓jGDF-gE<{?}=^πfWs+%f2yLF3{7ZLx~Clu4mY/Faߖ[? ϖT3mvA4ld.C&؍ #&!<T͋ټB(sirh!>dl˩e7]|n@[IaՈDfvTjUYmWQݮ^m-  [pk&8kƅƠ‚G4qL PQ(iAl񟖻J5xZLWדI̋p>4fbc]݆%%jǿe37 5!j#8lq+-(gIHN/rQ-C Zaʃ H[<u* aAM2V-U%n}Bs#5Wv&]4-&{JimG/ r7V80&HᙇcZWc r {qrw9^d; x]pV pZXF-3ˈL+  bĄ9q$ &p),wA2{w9342&nldsR޶s(qAO] o$P+P=B Ѭ#3nYjmemNUĞ7S+5zz{N}v"c3-C,X|v݄۟ɢǦFYh!ƞϧk:.,@Ͷd _2ѐe4d.bo@(/;3>tvcz^䅽muh"sUnN<Pp/Xg?fZ8G:#@ /$9<*+E)lK#)[F:p0{>}P I5rr!pUA?."Oz.8Ml8Z>VZߚ{[V|8b̯ PY\[V0NQå (v7nAWv ?>DU[V6ۻ )T4|p*i\Te!*r~cV7jn۟~`Loi> ='Lo\iͨ#:jZ=]ivLruC5䪤A-Fqa~|^ii{8Rr=V5Nأ4$PGP X\Z>NTuA:VjH$q*ٯydQGQi@v?T]QjTuI_J-݋Dٖ6^;xf >@0^9L'tHoxID_ @%.S0=v,1J+I! 
oOcu">Ds_QɦKNv92reӪ?{F_=)| .% 2{ncHwf߷(v[#PI"*0.E%(j.p$zBCqe=?0~%~.ugNإ>EPw \Γo-`iFk.bv5'%kPu/'o+nr;:b,]6õHwa4x N_~IčqZYc_`^3ACpM.)}hJϥ őamlO ncCƕ~wo݇4rFۙ k/?95^=5^Z$ [lk;oqyr ;lqS]sq5}oQh M7_?s.W/ۈFvmj~fwM >&?l׊Gwκ4zA!ZӞ^{z|2nlhev]Yc:AANY%z{;(8=XІmT"忒XqdOP2MN0F 7|Yp Rv1-Uk>ȇF5AʁweN)8DFkq5cIu%jz,-:K.8[Sx%]YbF'6Qc] %8ܸ_:a翏B⣧Z|؞rLq}RQ  xCF,Dje"w#eGNzE5s<4ބfqt+C0-֡Yr V5Fۍw?o<0 MT\WLFJݳ9inΌﯮH~8/MyGgn~Mg;s>UfpaRg9cZi?Ag+1KS6":[uXٱ4饧_F_U+ ΎN?.txi>_[/kj݇y^:}z,~}g*nR(ǜgӳJdOR_C}/ 忷g!Oopwg;(ꮉb5fVĞwV퓧i_Wh""T19t @U(v_߈xi~%?9jkf`32aﲤ#ř.M(-S: /+5ngkfXy;;[޾ Fph$e҃&7tmLr;fu\z5Ğ]8 k-h_+XŤ5s|C,IW?GV#50) Z4˃֑38TI֖BE+'x3'˪ /+&Ό'ϡ=_4 mse8hCgMm8PLTp-i.}d-g;r;KKKV==wnB͸ḻO ~V^t]%ȖcpV3zr QeO.Eq /KxQ\‹KU "3HghafB3ɳo1IMyRfU:Q\宓/Ge 1((kbfacnTJnj&h.DX6N)@@ 5RZ& d3hdG@onR,O3Yf-$vx%h*`Ιl\7pl105}p`IH&'Ay5Y$,{tJb``P6R[WiW>-[{Eg0Z$X&9cX3*d鄊 XoΥGw:"dj+iU 9:ҎYSB@#ZKP,G&p80q֔X\_8`eYFfᵥ+ rAFp,'@&MsqrP!--zA*𜱣KGedc L.pDN7/8mudalny, $1d19řU|YT1gd7f>> 8g &%R>¸l|.R!0afIoPtLc2l(wRMdQ30(Ԅ+>sv&ʲcVJ~Cy-aeg  z'u16$ȺE;Q}H5]BzQlF b)yuv#ΜV'mXW+2&.uI&#\rb<;4@DԖFHO ϤO/ͳHқ*6tQbL!'ZeR-0H' Jv$H% ! y *`J0{c}5ï:DrquYߙCtTܘRm~j1L*cڝ9UZbI%My'f(b u8#Z!*[9j ZV*.%$>H1)# EY% _[tK-*J]\1Sx3v_FsdC'K7oiFFy)2v!\?IWZ-˟;mU.Kk e?ןݙN/}ԯ`D2Lk7tѯ.Ӎ7ys` fI6&${a 5EbR JZu!uݞRm4XHnL+zMTaڻa}@ntG9#JaqƒY;Z)!-h5s?kcLBi<ކ1wOoap "rH?o㯌V|' ƣ|9%ċ'^@+u ^^۞dFK^]V-xvR(.c pMTH;]ewyu?Ig,i3'xoVL&οs'wQtEqE]4u\a2($=h1+`TVI,r41(p9 @ )PZW(YwHW<2Kz54J]O͙~Bi`+oj Mj3wD[ +sNQ;GP4/JYHZ,A;mtXBOUƌJ8#qk¹+6?WQ4&w$ݽ9޵q$Beg_ )qŞ9KbYR(&?$% /CgCOtWU_UK}wsg-{|-ZS|~Ϯ/Y@71fw* WTfu֪>jezr _?^gTñvO/vV ݬsvbt¶VѨMMBlSn(}j4CH7<:70 5Ald*=:8P"kC^K}WT$RXFSv APgҒ8 dB[%6qC P!YElsd;}P]SgOP8[cʹ{FQ2䍲0-xQb#BDDH"fƓ椹Bb"$2nsfR!$LfR{ dvdHt%3εP)3D"RPݎpR-.ǫ3!NMgj:pn!&,U }濶 ՚&u@ѲcNN ,>M{ǡz\cjmf/P~ZJX0"4 4KD.z7Q:Uj9c΂jjIhV5$J=,dߣ8*%iKmKZ&4ݶ8&epBx<@NT0q+v MW_`z:⥦XI#|,/(JK6<7SVtW_dR&xQ_#( E5J2{&8e>ڠB"r$'?-.5B2@^=46/"Yn6ʪ^>?"""y(%SGpaa&{,ũ#vydȘ:)c~SfǾ_'o]tT1)Yuք!KC0F6e immL1EUBqbQ )``EDj,Vj0Va"bX*i%EB:  >yQlRۿf%hf&ƅ sպ$Z%müZtaWϋ3%]:' ]sWvUhr)tk5¥x =oEUp.3\IZhz~O-IU:\Xyc22-X3qCi}>oxdqؔh0fxe-=+zŕg!W8.T kҒw ¡dGL?D;S g'8 W0CB0~(svmijAmK/t3巂Hj1zO_=t;mJd,u8bKe!Y4Z`sDh0_fbM>$RzEf떻,I%ഉf|fII3ѳēd657sC5]Zz^߾N`Eq$aUVJGgwn^ښRVHLn?^9%g*{}Nl. 5ZkJg?(Luz AB|0|!E(uzjt5cgBpTxn=]xr:nE6wäFu@VA锎ƻ0Iʻez.,䅛hMXiۈj K߿?v+S1N([#ϽoϢH`\rZ<8Wܙ7?=_ }#OY6حMΠzҜ^4W_Jͤ'B2CфXcD:I,wV!V)P%,-[fmbK7Ɩ (1c)uDlzO\WW䌴rJъ9L *u0ƚvrݮXdi/c)N=s/+B5qa~qc.A$c^!Uϴl(z3qgJTlI,|CCQR\j^( E&vXKw8TAphJ-蒀WB WpZh@_pE3|F W;Ы/ U+i)0gm 'i`MK(8L-J@촱]FxMzEq?eYo~v)@Oݜ6 HA)h{&|Lli)Ye!eV&ZiTV5JZaKt3`~Ғud@f %߯W綑N3zAf3>Uj }oj&B(y˄V*bȨm2ΝKrrnr@bIߛP'/RE0ҒQ\j!4&ɎllMOg &P@8;3nϤkPD>|-!\]BHBQdž0s8IyA{ҀN]V{u|Le^]sj,%qI*Ih@^ qGlNkJaRJaqr LW3Wv1qZ}MI9#n9=&6~@z_'=]r(ѐi}ݭ(pO-,4qD%%VZ}J: ֛Vsg)@!6S2#ڣ8ehK\)Mfz rɷj"&MN~ #u@G>\>9P+ƕS:xM)꛴"z"0=kh %P(Zl#[O|\q\@j!5#+(K4:3 4k@}oiw$,8Ho|(*|[qAXZ=u5YڱLM jG5џkD.cZVxZ1xҊXJQFvX6G<" Qac^Aic!yF3E<}5f=v |c8Y?C, F"RECL!!fYg8UQ/` VOCNr~evDϵ> =IGuZS(HSi"e) SH2BJd-eWn&#MM1v=f\x7/N-|zև7Z F ĒgAn__mUQxgz/$"ҟ-vKY/.BYml`1֪[B3I W[H7$ZRHQ пLyj@ 17'm#yh2A5Lqv}?+ $Dp}Ϳߏ{.Şt"]침_Y2/ 18a:"ϩgA ODnjx30Sk wv9/d3ްQ8pVWWx-ׇ͗JDR6')cx,HU"V]K'"s( YT[NYꑦLxE3ˈpLv 9 b9}J*vySib:ܺn*ń>ȝ|!B(\olmk[;wsjx*j,{Y}R՞O.zLJ9X| cDdo aǗԝY}{?r%?M1ݧVMnK _&K(o0ӯo\CT ntx'~6ͪMbM m?":8N 厪aև P7[? 
y4`ϯ~.,`F=DD\yl2@SOt::2oT׃+k11?(4jqa:bOx o o G;/|Syyshg ,{b8z_"ӎj2*WjF- bJB\[߰kDu=/p3pǰq ą2*v<,K\c AJi,c8_xLXo"R-؁IqP "ΉeD6LBij;@a^,>~ ORw}ٱDjW%M`\'%<=m,x%[an9Jt)7y=ZϺ@y;emI@ oGBGhOHrF$x*u둢㆗*,yMt[nv{o-$!D(Λ>= 4 K~țt~ڐpR2 Q~[ڈ)nZF}ح2SLy-n_HgSQ`ӻӬ\<8WӶl1wͫOfz~}eC/:f=_jW8tʠWvJWV?{8rf/._to ЙKxcˎv_-J\H*afz":.߽&7|L4cdz9qzGL^$Na x>q+Cx+Eᐑ8eǻ)"03c9qݫurۄ'4Qdaْ"^}n~}uþ orhTBC~]{mI7 y}!2\Pnwݸ l;o8#&}h@=h m"k-3m-}np5b]kN̪N:} |U9P1;ڥ =Rh)(Nhg r1)zxBqvG8ShHãpur鮹Ywr>}=6᫓y:n41!HkFOGpQ h5ile@ Cܬ XIm! }51h -jH((8:GB"5 /"a!KE0p$faJ3Dhwl 43R'2./'N%s.-e?5=c8K_Gn,hSo %2ͻ FjKE~Ug \faT²eAЂKS,nv7 D?CnSؖ| rw=rvbvv2Fr:mιHvE֏Tc"Q(XļNKBrG 5u?OG_bxJlpbS1A`˳YRN8;n/7H$Bٸ ]j1D5#)d>)/2pޣ3G8D))ELL$U)Vi^RePZz` T\N%S2)e΅j2{t='ӌHA2>66Bq̚{Z`^{6daH!` 1T`EKσ%\8d&3D);l *]"rj lɐ`XwM5j9[)15nӍkhٕyԇ˳k;N<.Q.\_j$SIzSօ>6w6S"izS$F$H2^o<Vx|cĐSq]m(xh$bl j>;V:br`;0LV.Nd X"B w @c%Brr;a3sTc1ߨƢ1FQf6.8FrqlCLq$R8 bl~\GE'xA[]>_8VMkʛ"{`&yK[kAi?6!ruw{qY7vf/ٸ ">y Np]w1`ae4i^#rq sq뇟1VPLmY۸K J竲<q{'S{ m̏b(בݍ6%G|&CuN(LHJV T}Dձo'<2CIOm'5Q#aoK2V2bhӧFxԔ7m.;d!LA"T /(d2y'Fڶ3lWHݶ!!̆oL T/EY%z +mF&VgDx$-*/Bx^=eCaLF:[rfkz`7.d\\ yz P!-kPXLq=d!T'C[S)P:Ã:P42KI80+L pps~֎CMYrsOG3\gTT7F;Og}ބPtUF)uq9άPp'/>;GW?VveA*Cis 4Hzќsz{Fj7Ĭ܌`G=h>\(5ĬL@p3==G`65R` )ةDWYfXZ{jixeEh\$Q"SJ|_ܙ 0 Q8kO5(e2T-!y]E,W=pvO` hsYŒ" ֠;Mj*?h }6#*'PaE1%w:| \kID)=0u>dXXCbVU,S\߭^Uv|zTЈ%Obڭ_?1F0A@g )<9 _7­z '0BY[h6;wVnV#HyPr):dyH`>/GHk¸ArY:;03"%uoJX7DX&4AP& (GkbFB<5#BJěS FQ^v5̄L|t5;4 ƪoGX dloܴh@Lv_V\|cp;t6MF=+`G1hVVFIEG3'w@Mc- F;@r9k0Zc^ r=F(HX0dtQaj|c+AH9&u Rs0Hi">Vf`tڳRiN'˘kɵ#pK ȷ$}@QYq[,tZWX۹x1\TF6HOvnnLʘvZNR," JsC]b[NP3 ~,7tx{F ^9qqےU ,(*5XBМIf=rQ>[4zM_t T3'Aam4s` M^ՙE(rzWa꿃B };f3%/%"a^'#u+)⥍ n,2$K܂@D%*/*SD&/oR2 .,>qBF'bx|WI;:1H#iH~D@0(Ug&vo8,K0WZLmVv$K3ݍ\xs>83"b9HDpߕ!ۅ|wz_5k; j6tp)xg'~3 !9h)9Ʉ`3PhʆEƀ $p 90s`Њ l< $g5kߵ,ZWɊU{(aF!SOF t5ffvzW7jI^M*KbĔN:.|w%&D- I]_0Wd/"^w1w/ҽjP[]2&XlUG!;z vRZ^Zf*yw'"Mg(ln ۶=]j 2ϋQKm\|UUv}Q'bM%̹LyoSR|spݙǃ7V=C?-Ǧm}cD;D܌[ic*@N1'F5!-=~cPJ٘Zd>]pY_pY_pY_pi^`Y0(R\q_dD*ϐP*)9A,@ e)@c}?ԫ`ߏwRObH{3; 1x F f[dh_ p5ދuoˇpBaʤdZU'*ljRYNDUy~A+ssń+ReʅP$eI 3aRPA`VdEmQr?>@ңb'[&ˇ7 ӛzO݇Oo>m?bJ}&<6Ê2sFJQ="mL-hI ' `+o}#6:KFY2'%~_|p7QȌwreu; 9G3ȝξzAêug.jC [YS6_N[ 7Ž,y.<>zB!Iv0ӻɿ)$,bNj͒S q%~x\чx;-O/FQ?O! h>Բ4wyfh`[;\[;/>B\ Oaƞ%"KIx^˾ǽsls0IRVٙWYV5'却^9]?d<O<͓5>,kQ# bЩt)G=NP;?Ah Lq~phZLIͬ \lvI'\::O9=Lq ^e͒eYb,8i2#1M89"IE-gL0b ˆ8U{'}5w6xF6K" !҉v/-$Vq4YST4M(&&1(KbOy*M8BF(J/(kB0BN7r﫴~tuh_&_*L5HP֙Y. nyiC+TuP_tE2?s @;Kw+pFݳh2}YQ4͉uբOoիo̽<ջODIqyp42Et/y]iUxp\b{< ̪¨k302R`Fc R8Kxep%(^rKA<\S]vdLjI5 @pb X8l3Q ؅ϧfW>v'|yId]l0v8sB%! 
/*/l7 .ַ`xKD \=OqBx\Td\Ңwa'`ZRhBVϫЮfjr8iڕk[yV)OZDO&Bk 6?̧rـnưPYmX#b;;: ]č'󥨎Km׬dʂg49%.U%nThGۣ|Po~&bƶ1r"' m!c҉ >nHs]0@- %yCXvX|˚h ^`׼cȐ+;.֤=h;|dx6{zEc űZ`_-Ҕ`EB@Vn^XD[Y[}  ՚# x [h^̞a^T@y3 u-}~ןr|M'/I'-W7pCh .i6H'#Nyū]]SwBd#Bu$tu)ŭpJ$\,T3^ D7AM'`IM!b,k"p eսYtqjޱNkC<{$y jvŏ〦q!*o/h9qACe^_/M=Ryy}jIǎ~ܨ˲,H>v$U%G~jMyp:U-EGpI`Zn S;c›T 7w(ZjD~Z~Nz[X+`&Q2o;mTO𵒲u)BmM&-?А rJ@=1: &NyBM4=']¤-a&aRz)քvߺc#-\9:-V1e+ R2V<62\$./\'1M&u2HM^ c= b]̧q& M_C[G`u,a9R2%,+V,W͞+ל%)Nvp3 c£^fPmN\0>Q:&GקkT+7|NC9^3Uf H DJ_b| `:kSN=*Og- GYXqun%xځ(lH 2s9$wkq׹ƭׂU!`F]C2nW}[!mw|!#麾;޵#VZ;z +}⎛؈[ ƽT4Τ=$m [:?Ymtz# U(-nkkqH"f]Rˡ(@:Xݩe9w&N-8*_NRu6X[ᖣ{dCz` wB`ӟ5Rҟ;+.߯5WdA$ݷO:c[˭9ַ}18V#G78vɟ>}Q='~?CQ&,TmGisV=1ܥjE[a*?_$pF8ir!!sɦΞ В<20SSճ7MTtFnNtKG5bfGtbg@ͿUA`]f[^"XmB\&U )> Dpzl- +4B)CZ ](y 9,mwd9 Fs=-[xIESujDMIJOwlb:Q,yXVRqB%:Yh x0"[؋eiSnJe:ELv o[!:ۉnuڂOug0~H}Xa9VWYp5 ^ϱ % ʲ XpHLNc-6dRq*^fg`ssyѾL ե$KJzXbM5&*@j3D9T<6iq%ւњîT,X؂`5ƈ4%O4mI"UjbcPZ"%6:aG QrKmxVمL񋬴!R)] X l¨(BR`A52g 8f8Rt*DHKI.BV`~Qh&W߽o%:'aIHr>:ut:="e!`7x QnF<~v<&wqo_qc5<=*qY1] b_}F @}ɂfO0C硿r0xEm2 C5J(JB?DbXlN4F&m4&[@*&6sMe):WXk2pX3i-V,hz hlBU;WUWŸr~oֈeT7JYU& d Df2}߭Յ۰CEǵxEm҉hVQI)7nTs)N +ɗ:R W}x_ 1 ڐwn𒒇#pXe3Jp# m|dN wUD)mzM.ӂBRb}؇3b+'6m9},& _PVf 0. l ޠr=ZOZWC/Vii0|Fퟴˢ+|f \x @{^4qYׇAAErm%c3]hZaHȾ래}06B1bbZ?=0QC]W(sH=uァyĵA98wzj0<.~i5{|<t^u{IeV>)uM\mtpLfXwRso}`["hOom.JcFݱ\u:߿LJe8O>su 0y]W4dĿKnʺ%S"_IPcj DSH46a܂'d"As'q/ 1U>&%\rίAӏ `y3Z(\?g>|x|hy[~ B͆'3}!m$Lotz=%e`}Cy$k(c͗Mџ% !<;v">X2J 91d U4jbc ȏ"u I+\?o/ZE)R(~~ Mlȇ!7\|&W[-ڎmq-FwpRdrQ8 DceR[ RԤ -*%yg IutS'hY׏ F3 }Pfhy ZY翆3[Ó0Т+k?νX D'{cVQN͗I9ዟ;<*ZqӼO}TU׎fW Oޔ#Kp8?րQϚ͠WO nN%s@kO)1nN{hjBb$AVpǘlV X8C88 \ 7I7^1~(mˆd(@2ERJL!1%B (f$Bx@Q/򞤞ǮidsҚLeq~.cQBqQڕUi[\Vo@WDKq^ fg "Dq3jwoH*Bʼ} ¯:{O9ߏ" iNM>%fm ~"$q?< 3ejc4{}P]r7;<Fg# h )ⴙ" fj sRhXhRǫ)vDa(VIfT9gYl52.`.(/- [:fJ 5"ws\Jsf1 =WaH+ħ[Qw&Y!1h-X,am . <ƺa`  `$F&@xT*qEgPVZ\ИXkb8$>*5|)qvd,pU4%1Rr ;7<=؆6XMp`5{̥Y  ҅|0 ZV*@bREȉy- UY r`: `& i@M`G cqhFCnZ9 d@[Y ػǍ$WzY`Y@?nc0]l=5 2IVˮ%U_~#*H1IfRdfeAsL! Tm&#f*InD$9]Im]~wxiQuesiGj" UjlT uTSJrf,*QKC@M8dƴ*K I \3->3e6g ?3ޮ)Ni^wsFk1s ]3%`yR*A8%`Ta֡k$ +97'8LI*JBI}ecK)VLr&"Qq`~b/I Õ4G'š$)~9O]<>ަ`#?Vd/aofhqv5KYZ6in-A l5ͽ=: ١]uàtc(~X|/\ؿ^0@ zjcړ/o3| g{` <>YI/ICbE;J\i3 Oܬw XmZ"?Qh/P Q{BTGgGrSfx:~9d^޼7N'| ܯ:S$Fj: OvB`c'8j>gO?|A 9~x1ѸO̷o`v{ k/, &iOD=L@hLa332ES@~j;CO^&_ԾgL rڗ3Ψo&Oå.w|)rb>GXK̟Lg~?Ngv|~? gќפs{ Ddk,jUTJ5\bI ĒN,7J1$IEa%5Ȝ wD!kZVn9W4@`<`Uފ>*~ Kh`QcK"`i{&g(͒$k1lܷ %'1(P1Ήd"% '۾co3+# gH? UFA!XHE@p4Rn[6*NAҪUfD5,"w17~C`MV(QN4hLI ˔k&x&YJԉym)Vv Y<{iԀnAB^s @㱴oW_f(* >\b-b2̑)ID31ɁW(6 %f ฦTJޑsrK(dX %Zi Ɗеvz|XgTr]9G99V)5$-3'SdBј<5I;J~u]L܄^DAݡ\Y\+V=+)$ aᛅtrw 0+u+} uNދ)*wgPQ, -Rj.0mu|,y)TQQ{G͌LsoةӾ7ܹaS7܈4wwj6I(he&<q 7ާ4`Hiq!4i}6URhMPbU?[T_JTi-kqsXhDY/+={ $&GWǻVh My6v-JPk[eAKdA v{"kl>irӠiKW|c v262׌SSvUT$sWNCΊX~C*wQ7^Xr\G>MWߣtpSWX[pD9 SEwxoZzpŧ1H (v J!j /٪U>cx}ݘ OllY[g~:m߮g?WgnnU,?g_V kf~mMKZ%R/;J/!yO59tE|BFct"VN" }G#ӽd9gAT`1$;wpS s !kWUUJʞ=qSFD(jZ8;:Gcz"0#"%ן;QD#Vh)bC(d˃zSή)Oa^ B ,&k=l3S(g(g_&>#O̓^w֝9a/&]}Ĵs (. 
e$ޙkY~P(t>sӡ>,OkԀA~rC"D'ǃP}zBwq\!gWi Ѧ@C.G.k$l\}E2#$t>W*aDɱM#sn8D|T@ր[)ުs]ǧOY|aNk5RڕfgBhk)pv HO5T&D yi~9O!ˁǧ?fa[kv%-rkX^^؟ov)- ZE긼 *gewev{?1[r~so?s%6`,P,+,}\7 : \(tlQ ?, fa0* V8i2 uma|Ud30{͵<R~l%d~nⴿq,_<v^-s_LZAh}!m|/읶/s7H05[ǹ+,~[Bai,Ǵܝb>CZLT$QCD!VL_^&},mqCi+!]cqL4 K1(,vkc9OWc=Y.g(^5tE/l1Ͽ<~{1iBF1RP:9@("-G;AR=r~lyRMPN4.u}5 AE>#}iPK7 P];W5+qa=N( )`QD1FI@g5W]};_-7sT@H>eh;:pRگD!I$ tg''K~+JCm@kl)VNOї3nKkPI펶lhpz$w%KI" ZD8M`FeNqT9|$閰b<\!0MPٽQ'G$kΔ9vk\T0J Sz :w@ "4O Zq Q\VZEy"&,% !BW fc[>h >r'حׯATj0inoS{/oa@*M1A}nW۽pd'@_{{Sܘ)X(ݍuVSJodpe>-lRPHЎ%7T8bp܊FKf3&M1H4?rIjDx/Rz)ȵ01KZjiJI# XXaD!mQ蠴 ppyi[0ĠN&Ǝ)O `=x#8_uj?l" T0/JֲgR~K|00B.+RǹDw|_ J :qJA^WENXU $w X $A4%FxMMTГZ.)¢c>c -C]fuǓK&RjuqY|%_@{I)zQQh~r tƙ悟RRӜqCM2HThcCRB +/!`&k1s"ua<b2v,zp .P Q/+26F:k:m1#8:X霵%"dp7t 8ʠDy?PǓɡ]vCIĩ:/d]&pK~łq_'+ӕWTx9>K+s%'ƕDŽptDŽPI_. f竟OUwoZV0u<]kCoKr|d7ƞfXC}3FQJȳNEg$4(H諜ߜ>yU)]ҧ 5A8>6FBZ+H!POAA 9烙CwU#T!2i1SnD a>Ef*vBmzB,6Kc Zk|]>- ~<0&X^k[N9}kz?` OKXb>27p`&0NR0d(+8.MaG`}t߿IV-b:cgj{۸_S9p8$ASA/n {9AJ~-je)&ڝ}f8|f8rTh}]QL&Y ~C7!$zZ-)+{J} =׍5mjҾ8V|XFOT&GR [v20v&'na{hĭL39ł*AIRQRҐy'Gtռ?gպM|Ǜq#Slm+~tVQ82>0S~O~&$[݆P!O*EJ_}|?e_ڜK {񟃯_*A礋KB &RMl{͜"MU;sd&n8N@!W_\zɠWدR"UY(=M !1[ Xd2`&Rvۊ|Ӕ_O]߻7Bl_߲9Y&[ܒ`|5A;8{q0?i7/N·0pqIhwPhK:`.kB3*=+Vl3ޕhWkye"XV-\Vq&dM|y7yw?{ K͐ }a|OV7P[7Cj\3~a]QsuVOy{0m0=!O&1F &TSKf x7ף]9q8|9?+W(IFo# 'g:J/[ =4C!1`t&,c\'Wqw_hh'KbJ?tF?ߑڲ7'iI5^S(V@Erɟw4s1Y}h}óa,|a;8_ɐvh+LcGN$S(py8>>Fe$bE,6 rXD>jtvk(`)A[b?y3QstLǣ3iͮz>x3X P  4ȓ|n3Ro.?ǟ?ד + =Yܛa` >Һ C~(Sw]:f9RgIm=AL*u'|VO8E@۔jwTfxGl3لO1J(4 */KJ0^89Di}`ROuJt}VRhG"z,ՅtP(W`x[!B̟8"pU}T+jEQ>u :j2 -q<x(D łg&P JO-t}P'+2R͜?й;@Q  QbzyDedJpeQhR .2&V@.)g!p!&a'id Dca`ŝLɰ\p\4+XK{1Mt'HboKq\]^+XudLmw0]Ufgc:Az8$R]cen(s4#5D{q]{}ߟ~R{Eoߒ6;t4щ-yt>_ S|;ͽɻ\^=HdZ ONo HX3Ly׏~8~˿͋^07_:.NN !B<;vMAMsz__\$y&Hi!6aSm{^YnYbYr59L!=p0}11pX{]M%BRY$e濳Zh;F5ۈ*o@pHK8?QL| zBNŬyD[L(E+ygRr*胵9h`pTq!+^`P{ s[ȁ۲BN6+MAFkEAZbILNd6(3ӿu y_R)P!&zo&uMuAg7R1Ź`^ kް kplѸXL&d\O&\7-;圫93=[Q} Ԧ^qrh1Y,IS6B> nuʣvZ~)l_ߖҠ3{ONǕ՟}~JB5v!5\tml2X:'/G&_ x݋gqxvgF.37}u5{F]4YA{et흹ˁ]1`儵@)ahcUdvr\\tMqԣ r?9#v0vĦ!b|2P.3shrQgs t&nq"ZhhĸHf[%Lj-cTI%X35N%I T>QP;]L,6fTsfȒ0N GMhm5Ict֑2d&:%vK rve:r6[U;,9:FASb !Sdv=5 )I'gAR&pd(Qc5`f'0tC(kNfvLf NT)Y%;]bt_G'"86\o^$Ld5O܋g.L'jB _wo<qH[38.ӊ/`:^Mg>[b:uꥦsuc:@,RJ ԭ-E7uHvc[{i[كO'NJ?f\_؎t(\g͘IG1F[C,ZǤ`[62n'%$L7)-P[Inpȡ%X"l[ȭznOH::?oBzDH7SV?\*^qf?|R8[{&ڮb]n%yq F0|>ΆuU w?vvvTf~=/S7̜AK| .{hFގ7bfDhuٝ0{ty9"e]?ci>\T;?=2>}tވ{젞=5Eii/N$Zs;r"uAOy|\ mmZ?u_F9)0h<Ѓu5p3sʙM V%ڜV|oVhaݧIi;hMfN{5}zK59P{5հZ%}އd 5ޭnS}i(9ԏUbiQnGkzm.I3 JآD+cJ5.7r6Q9g%$J4o,ns; . b6<M0 S0]oz֔ _DͨUP%'tIR3"$MQuP]ЈS*h~T^+)lŲJyˈJx4`̦xeKeQ d/TwXZ Q} ZޠC#%k;ݮ-74?S+)b+:pHHHCxOQup&u~x(`h, M6@oo(p΄,HRQdçS #ɓ D|f|Lĺ'URfOo qq6~vArSxE(́.6azbQJJvڥyy綤Tj[gHځjV[W I;pNA}u@f1?tAqJYtA|l5xb#:HR!5*>OwTw)BVbl"&H4z1kuAH!TjaiɈlung<^,w޺̪{:x٠]rs_Y`e# D\k>}w$ .8VG,|}xre2"%ǘ ^"9X>1:h$͌>› C;F%ÌSۧkcY"rlg,gȋ -+Rr=Y},^_Egj0_?,!r`^C xx:qOor3]M;^Ğ^U뭽~J!}R>)( H{A-9GA~tk$w&#>H"[}k`{X$\W#.QQCJ*u`?3{3<QaLЉ9%i1| [iOlC=ߌOBZ 0D*y]|@a*)x=Ȩd`U|u{,{XKgnZ;tGO3еΔ]MHK8gx}҃kXcoqrd03{>]xg~ |I¤ ?U=ViЙo&&tI#C/MIn ߚ >pTRJFM SRS7S~I{N<+5 ; t5wɃ}xe17a[8@Ʋ|9{ rsfZZEQ ^lmN~% Y%aPYʧR-h[$} *J;ɹ8'!\^C^rOۆA!ORڐJeW7[IW7+@|A%; 9|sVn' \=Nʿݭ[+߁=uUtUɪmvӶq4N*]WJέ9C #v5|-\Qn'ܑ#=c=^=cag]gB#{u5>P(£dLM==I% 8xb6sz1nVu1cu9CU'ݚ4kX[̴M?7,ۆ|yA-qڟ!Cm 7')ӓ4ײ}_gP5C'apLyjkUOwF!iQvK>[]/g b 5([Ť9)xl)c EAVV1k|tia^P88е]%Oi=VۤSK2O7KAD%[xsn|hyBacMҘ791&'aD1o[sv?| >iY4 gϞDJs{{rsl05$!kRp`QK6ʙwxX+2n`Qj[a8wDOv)hhrr-e_!avL"8:yt-(1 nv Z*l3nY8}kߠS+л83xX~4|LEQnɐLo7_1QI] !m޽U l8~M(OW|s'7? 
pg_IÞiw7ž^Ld$}tn8Cmh=j0T?FRz_:pE[&t =w$ɝŠRVժYfx YKj밲;|:V\xz @_i6O\P|N-}:͗ Wlz;X=xjPtQT>bJV]vA!+m+;)`qVUyWMX7dJj'(3 05Oa6y~Ų򨕙J͐NzgF E^ gy爕$ժ6hnć]΂loˍy^u Ȝ`+3Όc9+Y]Āwoe@KVBt-NųΪˌ}sdƱI<]yCy}bb?L&W48TgcԇdF)}7w|{g#ĩ|;GP| %ܶS9mp ɻǶ*Nb(DŁ-SPj|.ڱhԉP)a/;Ów! 7YbKvO"W0{k|Up)rh8̆"?kƎ"cGQ7Q;STx|]JQ"%\FpɈa(&x-!$N\"'ބ>& 41m^'LˎzB"4#E~#QjD(#Jn5ҹ&2k{p(}L$ƚrƱZCT`qтaN'u&AH[C41#cL)I992%'_z(w.8eTrv\qڂ*/{:m MҖƘ3R#D9ccqG 8BQ ƈ1_jYS&VpD(Z󡟏: a;/1Q#r}eR9QpxOA:I08|mKƄ2 8E ,HŤ y,P-LoURj^1Ըw,w޺̪E%!D68%*AivZ}PK[`(G?c_Gf|^>٦Q6biMr,p- ԃ)8 -I4+r0QU`w޵5q#99ȸ_\-:c:}I5`lf)JKR]Rx,9.+ ׍F7wy|v$0:\j^]5'0GM!G0$QA3Rd$zkƅnnK}O(LA3П,m)DHN VNKp0Y*0h#\XO!ĉ15EG葂 *3[ v+\ ƼO6%޳ࠂ <.L:ASbFr#J%ɅfZ\r@iީ%]]$amӪ9'#dͶ0) Hq!BQ=x cţ@?a [&X)|)8a$N ̙y<_@LC3`DN;\Y I na1B R`;V|`^9<̋$Cc]i{.m)h=ڜZ>gkѨ{ps1EQ(ۛLگ9ezAq,e+zyۻ] 0<)U!z^5 ?S!AB#3!X[bVB;S* &{LN!8$/~\YpGۃ׹Hw#bHppB)=6!m]PB>A(׷Mۨb0%4F~TQsݽШSU+V|'^t$b\J07IPG³q_ehLԽ',@tAa{%?"Up>eFt[Xt $N;WmZ^v;T` zoFӫ^<ʩP?C i ؞'GW@@$P1e)Pynz9EM}D>[Ҽ!-y(IB?|DY@x{pewo]rR+<#whƽov^/&G8a_?-ĞI22jY1hyڟͼă,{9:>Ƥ@'.Y(m #9/qkT(IפܶLʝW0)whxR]VPEVfzUϺu=LAʱuE$8B$v7Zbɣoq͡NY7c 9MĄ!&op"rS#=:?YDGǕaښ ֈR٤jXI-a,GQ"w9"Yi_ (iQHGj'JB?.EZo Nzswj֓k%q>b*8`dMl@D B'>_ j<]G.|OFCqq |R7fRRuPG}w'hM}h4\uœ*\=Ȣ~(A_^n7;4iuOFk~O? d#}Z1 8',d@t1L[!|"7rW4^6_{A5z[ |qz;=Xv$fM2)5wQ[ "OcKwaeg-4.0|Q=,`,;vN4B>ŸJL_@ HҀKiR ' p3?'=Vӄ9c ?@gjS Eq`_ϡ\u u?Z㼧KDiMu TE\ulVW[1"w=H{"u>(s㘐h-g(wJP k,>5,C&:V{JM`b~[53np%pbqI=|=|({Z:+lU\9M1%1,5+Q`_ďB[JPKE+UYˏ}K sݧK`G!Q r->|x"VK6b[JZӛ~r%)wfu i1\dZ<E)Kj8[`B`Kt9 Z$\)#\*JB(>u.Q mMVJqJ-ao?_( ]z='m!Б 4 n9L-ˋ GQ=y).9jYɞba+NW}׿$ o4 , W=\ՃSazpQ/Wzz걇;Ql~-f* 3ڷffWq!@j¾7.B* [?C޼{> ^WM`Wz¨@sl=8#3拕xw;/gB/w&|9;+lq;=yp ue Ӝ jo]㸧Qu9"AX8Aw9Jc0Њ`a4hhj_ j#X2j%SJ,NZRD&QRJ?h]̮4Lh578]49p,-CoL `߱Ggz9_!eJ!v{a v3kCSM!YQd i0:UQߗw Ddznܿo~O?f:>܃ywB&'EJ;B*i%?,Z 8K>^(d^B_TD3HkubkLif] k" {xxNmiH]0T0 wHjj5gl͓~/b]rr_X3~&ƭϖ`Aӑjjq *ݾ{ŵzBXi9|H;;NgPr8B#"L6V"f,S! la1e I˅TRuETz9I""L%4bg3da;lc BAFP r1)"5b%nR e[`^Ke%̢ViVٵ8;ȗc)3Om9dj{hQ0O7gH@<lb=+BR,gWprSD{3W,_.xrcs7ƟK70F$uʀ>*{o[XXDs;E_uİ"" PNZ s 6I5t-Q -# @^,ARA[B "EF4jFH\EZ)Bx%$-9 c?ǚch}4NKZ?ǿ ɇwO^wsȹI$rnT0}ۂUXJ(Fޅ;QdIeFEB >_gȜ xCк|;YKt;F; s0?aXP4qVqBif   Z1NK*ʨvn2 ۷e˷^Է_'w)"r@Rl;`i#Gp:7֬ĩT*(u@GEzaƄU2O>J ʆ=yM@ָz\.O5’a8J|3%:P 6zNR=&% N(BJ=>6l=pFx)&R +G{=ϤTh:\/|$g5V85bR-N1O\l:y?|&Lf{m7!U龜KQ,msY<XFu$qS ;V}^l}j*gyo6hVx<1)lHA&{t zT) WQ|v}(tדfg?>`r3NrB :xڧr:\ۘGjvۘՋy5Y綍Uۘ|FE=1&r9\{.tK&tep۵ӸMp<1.3ݓ ~>OcOxUj#jӣj);33԰L3pD&<3!yk8+0q2cphQ?6 |}D7ӤLX N:1 OOk:APU0,ej -S󍀉rC!jl*m1e SGᢈV f’`NT-Z5j:pԚ;mS[[j`22":lhXRje)fQBSg10E1p%$LH 9W )Naԓ |Mڲ)Q)Q̗WBQUߧ@/Dߧn2~ʮQcZ[|rwqZ#Lt4\qf8Y/!" ;| _=8*QOKm[;ƥGP2 <)5(COhxmoR 0+&wm]MPߦ_&$Dӿ_,i03JIM߆Yy(]G#6c;TW E%ڭ-9C3h#Z(if|ڭݺ!o|S֓$+__rC Z9qO7+1]2 0oV!v]:ԂBs֏at|pna )"Z"e^j\o롷!U#a@7`ݪe0cpZnQqdO9ۂ+回PmGu:BG&$;аV1I;Z<%YV]p G t7)S~W?usѦUw7YV1ʄƷi=LK&w畽` F^00L #ZsڙIs%h;[v9 Q艄3Cj5زH; ԙCK; 8~?㱲بyY"r=80p0sY<}. Xm{K|k{b^'/v ĒŨ㾆łb@WwwjDWA0i{rea)ec/N9~1LbFPc\Osn,ZHh/VH\ /~ˆGU`|(6+b,/!~@edޏsubxFh)ƛDM&xS)k㍑caT(Ux1Uƺ虳IO D:pM$j>?ǦkXFvm"*!,gzط~x~m|0"Sz:s(0Y.G4oKF\>f1*PV9vUX͹CMUJ2_!tULҫCiC-Ւ }#i tJF-**0(LV 1b.D q)MzTp#fbF 3F'<zt3pc'}@&,LY"6Q\OifpJ(:rA~tu:#}gZ'Iʏg@zru%LR@UTia&9dF\`ի}3.PBh^~_o[)Ħ~W/+a+&ÅfSš[_kq0g'v u!;38W|EEbkۼOr*:.#dFOnmuŇk6jмg<viFRH6\ bEzܘb$² F], p*a{ hF,8je-[CΗ. 
)Hgo[{{+h O/O QH \{Y`!h~pz0=!a7ɏ "r߳'#&?뷩똜g9Q )Ξ2*+4dNK2QwFwfHsOk&[SVQ3XW,\7ENSDol2gd>IUm¹~QVȵ@|iw F|}Ep3Nr}C+Q'keA U5㍻XaHԮLY?]3=:xFdTC%5C_Ur֔ S؋:6 ًPTMږd޳.^>uW+fyoBPt6D]'EndՃObGKT:GZiy*ުNы,97Pz_8ȭU^/\]XbJԕtFwǑS"YzEZ Sd#ŒwtOdtMv,~ HZqskU ^åSH`܈-d"pweMWݩ% ĺM)heZ9%a#Yir{|Sеx7_X*H'L3m֦&gG<}sNJz ʩ-@ﲴev?[H;ibur7!Gb~C4z车׷r @);5'(?w|>MKm 8#v#2trT*.\}t!*Cz=Pq Z^tC~Q=A{ڙcBPtAP* FIhL&(iZnL(Bp0qP:ܷ>jPAtAْP:q)uيn1H)4<c:m^Nch Q_;>v1P"cA`fxj42I(W A<˜[CR2*힑 q'FHflbH% X#wjK: g#( ml`ggAtgDO(!x>PEOH*z*և C1륓v!:ʷÿ@ϥ` 8n[_j7_e.?Ԫ{z>-= Gb\n..?t/7ý<H-V&WE̦chi>a~3w~ R4g3[OlHiiK4>{cY+f/n?}F!tZ`38o#nl^~^|- 4u=Zz'd1B{u dƇ& )Lɻ*Hߐ|&eSX ~A:nB=zX BL'U[ Sdg-zޭ MM wNbJ ؽ]eP ~H};&Y=4aHrӬX)ȒL3=O4cYFaw "Ka!:Lr*O0P]g򙄿3 yiyw{!Z O3 eɇ׋3p;6D mStDFOWoj CXktq0t.FC41h"zR;u7Q"ZAa Y25R۝ZB  S_+U'ÚOʾdO Iwjk ,{ bU']kXUUFUVä\WoNlձwn z2^&8dwm*®m-R8DDK[a/xw3CJ9dv,BnS2&e*4Za'+ԩ ŕp?`'v ifJJE )CmR8"̍3 g1+m8q]sDZeT(BL56c(թA6L!0ʤ&RysD#L3BR 6XS HJ,0VIML8TCq@F9/D A/gÆb+2.bʷ+,s4i/촻݊FTghTa ӆ$]( d"`S; P>Q[ RWEI>&H窾(ؘ|12(,"R~nK[4Ӝ"֐LKY2ԑ9F9yTa5cu+%0sdT ƈCn5S59^so^ʷ)WA MQ.fwq~wBTԍ>=z r׷_srgSXPM?'pz\mtrsu~ 7wjnC3`]<"[F$l=g{swa!?!8)mUx,g.Bj)j)w6DdỻR@Jrb1w}F0S:+ɉhhINhMAz7QNA>wqtJn@ք|&Zۦ:io$8"YNi΍'.  gv/#,"GN{AGcH.z.Ivy̭3-o[9bkǯ+'\[g+#?>d}d4>9&` .UխJDM';W<7ktrKfYg׫YxpdOԈf &HgB"mi49zj-iU͞.H.[-|wX&5O(R&T+8 $ɔ;kIՀ!EFDԽT)ʢAۏ@H.O1l_O'/EOO$?ّYܢ᝽O9 V;k3>Ke G65ъp7L +TzkZ+=#3sm>ባFOuf䃁Ϸnj/ ˯wfdkkk˩;3\k>bEǹݝl?e棎EĦ-ՉGԄ|TvaMx5h6cYA0xƠ!/QKTc@=DHM@5m(}+stHLU_{a4<@#[STMhg["@|UwF=qlxD4veEu'+WljțZ堋O*oT hm2J f&s>ˊPeqRQ\Mn;rG`rx\*hPsvt/R3Z.2Wv=DڏH!axNm1!% c-gŠkzPkO[I6SvUH< }V?@Y08U/C)膛c3I&2$b RPA3(u]Ey&9WR-yO.Uf\^PF~a$$x8AH*˳G3T0, BGvcer[03$%ҨsL{J7{Rahb+1_%)ĢCxՎX}-Dv7i<Eͱkּ<šmhZ)^CIk (z}gﶥ#EE^:z᧏rrs*J bE1aBM'Az)饯*ܻ$EO^(}襞GTȐl5&&40i({D 舺J)40{Xg+Sʼn5!z =[|-kSrD [%΋4lNJ;/EI{#h8cknK[~ 0`A0RB*w3fSk6/?1=|"BOp1=3CV(8T NZ!/@`gB/", hb}lU񮠍nQw*Q23 A/|&Qa һpCn'3ЯZܫRhpko&?+!ڝ"joKZ)q%lAݯF,.z͔@~vjKIJy܋qsMݳ.<#Cχ*aI=fvllq`IȭZ4csoR(-=PW%x]#"ZB_=-vre!ج5~B}N.|XǣKIk^[8x9JX/9hW?͏C wg"*c@}J69lc^4g3!B^>KZ5dݹkKtqgմ%(j wIiK:V俩%vv|'v'+ # &-\hĶF=fd5g_ѕ4 CP#%J*akJk𵂶8B>?T?HSs HL{ |oB# mӈw01Cpt7Ί@DИ0" =m`G%VQd;6.%Dxk&a5 @ @%@#MT"9 C bBr* U\gPj{ "Tg !' @C9[|pW Ce3 *;5URr&" In{t,bwU/m#b`l/sƭ n@AP¼:em~^ܾM>KUgx9AL}$p(|/ J]"E BX@>=ǘpzip†1rjb)v P\]!%q5fFEUF23c4Ujg/rK$ J-6)f*ђRW-/|^]n4EQh,>zC-| '7n~b$F &a+N石4b :)MsG^?R*齛IEzeiyufLXI笂BiEy )N"G4޶su2{sy8b܌E:M!,L8)qDHPtB(!1(=Cxn(oP✉LZ2RKfРr2J) F23%38@s(2\,"Yz8#ޒ JdM9΀0ZcL)d1Df`@8M%^Ịb֋J#+,Xc}ƳyϵFhg/(Ì YeftZ E nl a&g崇 & Gg?;/\X?[ɝK e=y Y\Jph,?'|4?QDԟԬ o@8|r-aĢg{628d@jKEvJw^w*3aVT4RbQ6QKS4UΚk`D^\B0O!ܭG] }Řcm 'Աjkģ9i1I~!-QatڊH!Y ć; ;g|8wP0T^C9ކNDET^%zԮXbJ){+0I5("- Qi'8}p$,g}"1ۛc(Y|TWz! c|l$H!r$y_)S̡miH~ѳW#h21M.˟ #X7YV>KIH\/Ss3/Lf" !. 2bC@XeP&h G*V Śԍ"a#zTXX.e2i`Mx#{uxfy♭ `Hfb $'5MF_CTYP(E,;*1DQ+4kN"R띮ߏWEU̮9Zպq8%} ד@T˴_ށe7ɂvգXNJIiH ҟ:La- QV+H~!|C07^t݃mpK&m+p3%y|כo~>y{yzѮyd?{xaX"R ODRlO.jxJQ)np ]teeh ON~5saߛGr_E$w.o.NӅi;Mr 7MDjUDk"tV H38$(ϝ%:ki>k%lv57[4w$S,5 M|1d!lչŃfl9N1L8/̯^O )o TpTS7pR1=*cc&VCߍ:H iJ i=5sQÎ3+eފ `XIVa$SLDK*,\!NiݛS:p#x84`C v1DZZ"ɠ"Ne ~'Aplj~OۗT % O(rvDtrn;ioGAh'0?1 9i843!tD}$MZ,-bDDI~8nJ-:[Myj +FNHvV>I<dk_ >+k*,l[ qь`,J\* ."Н%c) ]4*+piHpH0j!! 
8 &Qu" &%ZKkvSG1d_UCJ V~[@oL Wdݗ[OejXoA> H+:m{x OƜ{`O9_y4@d>tk@=^ o?}na"ducmy;|ȳaK9Ӕ%Z{y$VK#B  H8hGm¼J+f="${D Ac=Õ0ap<;*|zpb7Lj"stvn2()xtɘ,2T=vr3{l_̾Tr ًOf¾X "]m*7b5vވ{8@o;#'6h7p@F' fq8U7o DdA} X'Fcj1 &Ć1ʠL(]eS4Em`#0 KcS ||v}3_|rq/w6Ϯ`r\n*ߵW|)Vl`/6hVd̥ ֵAܔh$?*اe%%;/ 8LQ;N&G& N4?%hWgW09?KsrKfC Jڙtރ[s䀗YGuu)qX5ZW^-!1fݜ8F0 Z'dN]e=1Цumt]%Qyny7PnWڼ_[*YMmƖ*(!P:5EUZVMo.]Z>;XxhLs9+30 ^ZElatrlNWK^!XT+x4,KWhQ*ZZCORKf{]-[y;a:TvXtˈRc>Mv;!;UpySp\*MH/ :(g;8a Fvdm) !:h\%;h EHTiA32ͧDo\vRh%nh5<)X0AIE nǦB<6sI[?>wOjrcvg;^&,/YR&FndѴ/4XUF6ӫ~Q>ٹsiT4Y 풴/+i=7[fK(870_#PcKM*IT!j`Vqqs彃Vv'{4 栃DZwToff!az>aA ݉v?>?N]o|Q֌w]gf[֙VwI>Xo|h.e ?&TY@v&XْΗ4YJ󑓪eQьu2{C;iZ=;Qߢ-EL*g!;;GRG3yު}N.1ش-rlQMayRO|ONv3JJUQĐ&;hDnH`âpux7wg@ڪ2`$W(WyNL!ӊ,z#'B јh\7HA0߀*݆j*yONgm;sxjÂ+JӋStzZ3Eg-p$kxˑg rN1n;ԍk_Uw%}u:>:E1–]'[δqxQH氙wZ/p0k=C녵>z=P :=*0TX R| s{cTa;US(*jPQrk V+zR>E$e."{({4 OsM-TA}zxzOKzc0w|[,ޤ].stϟM0e^//tWᗛl0ݒʵ~[J~*eDӿ>oK_R'F{T. Ӆ!Eąu݌DiZF Uo[nmx;w}H"oabϊJdV[L՛HSXa$Δ| L]vo٩hIc5#?ih5az ŝx^9,io!rqd"&6GQG81DUbiICPaΖk؎¼ i-.HXlx'-?| 璷7jn2vu4q^ģ R%E(ؘFHauԦQ,Iuir{3 5g-3v>S8Y(K`K+Y̼y>a"$pE+-Y)NCd9o >eNɁٝRT *c홊T*/yq?} B4ۻ '{Z )gyW❍u.9 ". J'ef̯Տ.g+OҨ0ǁhk!ZFǝ=;x`.5G9xg_i{(hձy!5g=  4o>"2LPAHlIbg`7up$r4/i"%0R{/ fx;4E0l3x5mxM +Mys±H%0̘AQԖ[bPC, VYc B0s <<TL Yitsea/'ecOMي҂w2*rDI"&$iUdVkmƑLvQ&Zrsnn"i[ -A[ 00Ok[ш F+}ٕf_>K0, 듴5j%6jHlm$mˣsņ)$ix2VzL[ W-yhxrI_#ԩ OǫNPYɪֱdU'} m]|1:doͶuW{uvE(dBWT*_@gk" "~!Q͢$n /ZZ <,,*ASe=wJ*fH7 RA!Yy`;-8FDG1piP42l06ZK<;i%09:` S@M-5y [0_(trD=*iRcYMkRis?gO e+o~Jyӣy1?bnV^> >W|Ǜ{LϤ5N-)Gǯ.Fi:[y=|}>cW?<,>=_ͶH&ڬ=c"pL= |[^jP0b L5pے4kMoYt'IӞiƇ؍$F~X72M%! 7-RjҟDʲDlE-J*aTaXIK`vkIF$i^ ~CJ%IbQ|Ip3BP*,1`>x1-bM1/JQ1&%N 3o`fNA9 n\K6m'AJCVٟ(Q,HNSH U)}N gq96J9;Bcݔ̛]Oӧ(7R2,'GQ3p]$Թ0CSDyύ@RWrqhqw¿z/Zo]aXkU]O'M/`(k=_tLgI0~ Ѥx O GzQFUC* ,nwRْjˀ|H T@CoR$kY"F[9o%J_yTRUf"eIRhqr[N#G!+Ti ҙ9ɦ5h gl#!;gyq61ӆ;^F%32TISB-q΃tLjɤb3P8|1R ^i֑i!3ː!5Oi Up^`.LNe@CR.R q|S&jT0uFf%gf{eajas&ԀFģi,cjQ.]Jib8u:6=k2L'3#x@\׃|gBTޮgiɰF‰DSGv|M7Z &Rߧ:WpۜDK-4yKt$0źe*rQEڃ{,MVnis$*˸R wGUFi5}>,~?x*j4F"s{1ehony<0[G\]^rW\~ v駋4~b.tV/?$-c"ҼU=^7Ek"y\{bi9]_|ژ߂-j*z|$ǜLlјJLFCq#o뛗~i@X2s5s4;TRs5יW?Xv~Q_Iֳ8Эؾ<2$-7i6l+7=Matf)Wmiy,'2Z ̭1^i{74knԉڮhgؚh-7T-z38c03Ɖ.=038Mvkv_^kiSmNzy >Q>0ۃɫǛwFr^P9@{|C^ҚqH`&[ [Cau1zט@Qp!7zSש㦹 y2K}(0bHIDwlQӽ݌Dj0%6 JբPâ Q>IkW[HHxr= Vab$A5@(,RwiѴ뼺3ph%oW U10-T$m # ʓS J+@兯MɏE Qy0[Vn/a˜k6]U k/.$Z ZſЍoYa:*W[^s|q1-@/UE\êl5N>)yњ"I򾮏?kb,r:3vYج$qRs4uR/o?]ay4|2dlcHTn15jȩ 0[t[kN8d><N$i.`rf F -yexOYL(aF3^'AfCIz-[S)τc+P,(T*_{ [F.fiyq?![ס2'9ahc=DNI` Wh.SM<6ĨPrLi  "X WP'DBMp!9@ZHIK8KD{#,  tnFUD^7WZ(OTS&IqL ĀeK $bLzBM^d`I-/#кdF70mdEIGEUZeU +Y#q _zJ)ʼNԣ6Aq^z}-ft1l7ӁJq"*L6&\ށu"F$B39fz#ɍ_)0RM2<YwB֎ZjJ3,ԕ"#`Ƀ lLunPA vndȌSDDv&KPO|Dʷ%4|ɲWBtO/>ݺ#0n;ih_:6jeTM[ƫ _8S2?]_EBf),H_ϋ) )o.NZkdTyL"2HME*So+Qyx*ܢeƭ1ȭ'FB5VXU_Z8;z-m!T[ŅOdU8[QC>5xݼV0Qt7%pѲݍ z(aFPmZ}eٔ^E / N;i=( X/C*k0%({Kf Px nt(SI6).>3ӗl>`axRwF#KjV+Miћ"W;K5# .Y,O\ğ2D{KﲆIty8O~i)e̩N/]bםSߍ1v/Kes玽[Kb~dM5k5o\[5L=WOreVݧPiTnnӆR%qý%;U-Wkr-?%TBYVIp2cEw8ֵ6q\`Ecٱ",dpɅ"ל0./u'Dm%_].cEڎy27,bLwR`TqLqР-b|DX6o<$tʮy[0pr^Z2-Zy1CzGmϝȎJP5}w;SN_kƛox3y}Op7'[#[ž6E` 4 w[:HZ[#:hk(x<`V?5I'KhT6gpϳn=ĸ́6|wuNblyҜ>_c|b'|Ϻ̺̺̺&FVF{PxaL#D>z :` rn>?.sؿ?N]“g}li9L55fq9I9gϱ1yD_VePb%nVb-q) {Ğ1kr[5HkP fnVPD{.5>kзzj)W̔cf&?-y wnsc{f-_}{o;Tw#Ȑc_ޙjdF\^\]Bd^F`)΀ 稃 vM!c=VwkWѸ ÓD 5rQh&4!/s3}1a ߼&cUL/dKyB9D`M@A@K=\/۷ax*>$FX񄵄ч+3G^Pe|_5bA icŅSϼF!nSw3&WXt/}s^+s]"1j:IWy Z j%35?k!Waī_LnM/6JT7ޓO"6lL{U=A%s@X{ }/i  _Adh|eu.CØXZ%PՆ9V Q&&+`%(}%K3{;t?وè8DDv]Dт0/!pd]yZTs"dC3>V/>+L1-|9܏{~9{~ s.o~蕅 \?_:N 'k.`O 5Zr~(= 2S- w' /Bv%1^yKgVXuIHx9W8Ccػ=Gpd1ƽwz:DP53-`1f݂3;7ܶ҈AǍv=hdr ot6k R狐#aގxr1V H+:˱يnety7BO _KKse}Wđz?]%_Y͖}4S28"&bb'|Y^,|YzU\Kڇ\rh 0`0F>EҀ'S!ԁ>?.avz}M/;κNa^gW)B7,.g#b9l96&|yv:OsF[l+V m$yROmz_@ 
$_>DGnB/0 ѻOh -rii8D[%-w9=m%|VUqV^h%@Z`d*+dԾ2eNX/JAaлZ1ɦ\jPȤ푋#3=hX_ͧV|*wشa4`jDVx[P&mW M9>5G["3q:wkeÁXpcG+Vth!m zaaYEH&AO枣t"0N7]8cSkr2;iƿh'*ƈ_jirbɺhO3%Wud.l* QF甌*Z2ÀZ3B%"Ѐ^0+ K{" x1_/?}ʻ\2~kTy`Mhj$%)ygժ6(/q*ďV'4[uS*lU=oJOg;Il8V֞DZ{M~H rb]ԜCNk3-drJD:9Ј]Z氓Y;2TѐtT<o)fwsp Iy-Q\[ >6uF()`%-GJc1ާё| Fy-5ɗ "(4CB1S-l7yjy{f-=h&B-jGIrO^JI II[S&d*c#-׺$uN.3pgX6Q.b"w Y 8fU䱀D > .:Nu 2/hU1|bz*,$P"2Z$֗.o 1D)?*DWٴ>ZLIYN: %[9OТ-~|w@o3fvvQg66yO\!@wBX(`)P;v5 hBL`դɘ*, #ҩzϫ]- ,?VIcx,܆Hn~bIF >?|97$u4헯 yoޑp=_q7Dw)g}XmE=_|w]X3h#N(%xٗ° w{Ap\3&I)w_@r" =O:a'h8v]1efI1YA̬?aقdkuy.αiu<˕IkIQ A5CZX։ie87me5jy8粦 ep#CJv08 uЁ[25o 09-<ɒ(V|%=~ަ~ ͚W@~f3=2>*iMZ0w`<yb.5].h2޾ilTq!+U֎AS᜴Z/Ux861_ Аy`G1,b M-DHN[T&wk-jd=fP`Yw,_×ݏUL %Eϔl!+I 9(U aq.kRl"2+^ça,OѰ':}^qX>n3 r IsoyuX߾럗XVp@Iϓ7I'Gt@!!3TFAKDju')F*"m?&[F$|L`iH>0 :66@YvJqeޒ?VL:1 S4ąmu %Y$sbPZ -*iI,Y@|!DIjRpN-eRoQ;fe,6x)cx$m WdIIMH:[֜&@KB% 0e`] ZaOΠ*=\ـsΚQ#9+V֨Ȋ]ݪ%j^$b@cVb-1V6'B2Ζ?w8D8v)+3NUVhii3EbArՊW\.o 3=1p 2 VAˤ2rR[5rm'gI26fFά`V5r3$ςkV]7&).bfLPbvsl02 6MV:Y0%L/d[Mi^H/ZZHd:kD`>fZ];b&W:t1qI#_wX%| =4g );XN.)9lK1eR IbY,>U,b=.0ߓ' v㞆Θ^R7%AȚ<szDn̽4;2\D%bf>kRd=0=Ro\L * ٻ@Du[Q~0 PC[gw=M2Fkם/[ qo؎cpԣ9tH@7{g)uY5{۬Jg~Ujş'՗r[=)s\b<5*i5JѸ U bDZRWV y $hȨw̜֜㑧jlBH -Y:#Zl+NhAP[8gPdصԩFjv:4ڽ\]$XݞkBez)Y:VW"bIQ]x7S0ko&_GL.?Ns̭8n6/rq׷t?ĶQI;t_E',NJ8z-MKoQzam?O>]stze=Śa]TY]Ld!/Dlvz0bϻI([,!FvK`-> n),䅛hMxvSlDAki٣뫒x;H0 -rȼMAsH3₩Bd$ ڍ&]6m~;{KQzG 6w٥FMڈqt4пke*%q -{| =zd'`'T9#gO#N 3l/M `p|ߤ P2oy#&5x>:X_-M/@i%K23cW2_s9p_'?my#Jd'ӺuMƛMݴVo0Af4*+ uGHr/缳Tj'"a |ߟ%;aQOlEmzʜ%dx*gcYэm"P7BJHX.7ubI8o*(}?~{Ճ_/iZ~~sqEͯjdAVm9yKx9Z7%%TR+O?݈IvugMTk1{5ߌ&3qʹ>lj2)S{,;R'Z*8_oˇ+%pklt'Bj:YeM\f>+n"71 KPJx_v)ԏ7Os=ic02T3p!$Җ^AWdeR4s1jtbJoo݈"M.!ĢG:Cp RFH~SKȷj91a3tExDPA6+[ !nV#DXo ϛ »B엖mx|Ez$T4plFǛy0Q l#QR(Ә; g8%-FV!uT h~WY:b~BcO C<9Ӡquo(Yļϝg@4fݶóN@x:$a;!:?tm@|ie 7hĐ-Jp= FH r+6I ]JhGHce4G%zڎJ"#CN(U&sh,ֺTZ7$BK-?VBX(eTX)+Lc [3n(dWuc wyeW9sgy}A&A@^ b دbp#UH)zM:(AXB F?UO¶VAUb |G9J- c n&.}:͈ȈB^Fٔjn]#nNl] p8莆zMxakP ~ݔY/ΌGf"("RpR[Si!\Bqc ,] A\3T3,P(L#Sp`9liiX`tQD[(P㼠T!!b7+J%"* PQ zZFfw|s5Iׯ IpO q-]*K}I7(\)ݏ?Ā| UiA0x6}vm'ɘX~r`y.ln(xMٌstt8sw[W`NEP)iuӪWSç"8p].8qQ?䋈7%c[JSo$H0hQhp[!1@Ipx{.HP(C팹ègOR1 HçȈBMh/w|78S13ĶS.OϧSb*QўRE{BExLL"8X [-mr"W')Ъ$iI x*Sqhe>`Z)* zxd<+eR賈3W`gRKFaj!ܑS !I`⎜(4)DUH/.5@{FOPuU(p|~Pz*Qk0AZYeƞ[O{X~yz(E;z ## 1~怏j#6ƍa+zc9]d;861feb}O+#&YzA۹^<ܶ +[}n bU&r=!ŒD{x{xӃn:hT8v =Jbd*xsfSU!@6K5,s/[Es6+08e1Se-͝Ǫjq`݌X~9gyUAIR xizҠ ttd/be:j܋ǩcg.vKn@'~*j!IUM Fhwu+4YJl'x~74hf#qf\t=/5KKеqcF3NŎ$AhrBxmA5 7KZĹA N75$_'sCq PT$gX/vL%,e}S v2#ngK;iY2" 1НXDl]NN:fAѝğҝnQ67Fлbb:mtn'<8wϱn),䅛hMAq~{ }_#A%s9uNxh-e;@>ݐI JЖq\jNltLv26ӮIv%=t;Ƃ ?m;}eu&mu "IfZ&L[d0b 4:F3X !.k3mN Vlw-^Pby/ATΡRiȑ7;F+!2a?:0Ya8TkzTWSFȫەhisB:sӹ-owB4>Z8!"pꈻh՘F NFd@v &>DE#j (=)u==b=a:sG|$%l9M{j4qpy@εTd]ėCs>MrRWm/RAT2 }8~](H _֔Xgw#@Mm1µl#r( LcBKDhp)4y}xd5t0nysIdŴH0E:7*a % u L.FN5]Dڤ1W%Ӽs;nzh\;qܶiyOjTF:'G԰7[^M__]TB18!ZR F 5XZXA\9B_j}Qu}ٷ7x1_"D6&M1+yC[1S$8Cm -C\H<ǔ mnF/w]xGUhn슓/I k#Zn_ɑ0phqŲ =Ot7ЍTKᗹ2TU% Fq J8k[*` Tj!$ę DL\ 6k]%#-LcPX)Lա Z`MrXrV. J=C4EkJ 9RcbHfS9U:$U4&,@11zV'uPO< w.|H?ߜB4ww'{6oN)[POk.tq6GS uFO?_>| [Yj);ڀ"Znp a ]i}C'&?$^W~᩼9IB^/~׿siY<__D.{\<~{rO63̠2N&ٹ6{5;&L'W PzAlTd5JøGz7;-S3Ekyiפ $LmIdV=JUR3iƉ*y/ש׃uyw/1 }'~ gx?=YRS(I>+wq6ܠM&]V$++ F~eHXaQA-%{od/(Mn<_AeI03ҌVI@$p'<#¥,,[ױ9A;>e2Uk@9p9OBPZ>*@!  
Pi~%<]/ Cmݨ^@+SP( R DH5#FEl2c U3~RXQ=)JZE솴+T?O8.w^~7t=woVd3&Md&./@ϓln/qd4RRj@O7VV2뀵uK1)RWFںtR VGϬ؅|"Z$S5[?7Պp*TVvϣFZ39/]>t0.nm=ô Pþ^O5ܰ+-VA],]v@5 X uBBp-)FGNƠJ15hCY0Z ݊4Wu!!_/SP.9~63A-7$HZܾKAE~I kr.̈ҥ2zG(ղ7l=iݣ1K.M2+ܪ,{h.$EO'e&C5QA3{Rsp3/cu5pssǙGDS t@~Lv~9zp&c@J*@aSp;dP(APhH:(5D1,؊f&/j i5ﳽ(7=!|E%uw;,T4OI{Ҷ{AI÷TEt.[5]^mQA: ~n/(_Tz2vs @&UZ^|E5ݏ|b.ί0`FtV}`5c,;GƠ/c!}cJ5c.~s:|'U4\܍u@ఇ w\?xxr!\<(\Eq$oIAY1r@2zWL ƻB 2Qc: ]7*Tivge ʓQMYSMFgf~sfm6kAþ].H;1b=#LndžvǎY71b&(G`OI|$T'Y_E&%=9VUsHW}_ɂ <ɧ@h}u d~JB,AMf=G`jYDWY&l&&Zrl2w twtd,Y,wTiY6NyњɤHe G\U$pw $K[hUJ>ڿDqs1k-V[b )pc#4 VE(t[- Z!F!VzVQEZ.rTVoV<+.U7Cظ 2RQh(, 6qm%od-a0%9c H&L)+5|&DtwkTR=^z UK9*(g0wTʝLb#={Va }1x0;A;N*b5GՌp _@%Xi]kPšH-9nÀڗX"R3#nBof]<mׄ~;&o$|SLQh\sm%#?{9GimŧVeRmoV./i:߫SRT ɜPכdӉ&k? <_ζS&N8ktݷ׳="pw9_ _7 O'ׯ^Q2cTȌR ktP҈T _(z@36KWu}@&7{b_ۻo7 t}ĸC5q3^R}GoDh&ts-1*Mʤ% AHl:Y#63rA>oK(k4: zG}pURC|Rjm cz~/"k:]#Y `/p{6QmɃǫ~4 NeJS/;h;Lp';I^ Λ JOfSƂ*z^冠lO+SA!<$FQ3A&+Vm s޷7VyJ{B,p7Z R "oXJ9 ^#;-WOμzv1x @Hܙۉ2p9#pFk~5?-#G, kΧ;T/V J]Rszp0Bthd0ΏVN?> 5<޴ܾR8 S>Ln ?}a]T, g7U%SˉƱ1.YstnU=a$LWhVP=W5 GpކdLed\BV'>fFd|ʔ!S]5TP wTZ¯6S{aLTȫ (GYޤ8=TZEqͬUQTxd`[l6 3ܪ !L$1BTR@d DH:B;D(9;GwN*Z*(X+YJk ,YY@Gf: $F.Qe(E cn81s㚊I.E0FX(*ԌUiEw&J2p@P;Z'd3= g9K>YA>HޑB(w^K-ꔧ1(*ZJ .p 80MJzG ;*]|5^O_)CxR~p(ʍFVPeK|jȁ4RTTx*2tK݂^RJЅ4<<$(O.@ z搉I:V=2Ո*W^sUrfYYBxki\za5gԃpK((B9RAU0[5Tէ@4UJr VZdT}j;i_4O)p3à2|zF)цqw'R/nV 1[#hE[TZKޝJRBNO0})96h ޒOv'ijE iZ!ȒzwO@ xoЮ=2};&3Uߢ%mS>5F yo/n3a1cƲ7Lh4F#Vް5F(6sɓ41" ]b_G~U͊CU͊M5+~N3T106nJ#G έǥ-Չ+̮=8wU&k{.h#>e ?Ǥv!금Qqhs̫sm GD-+.A0WD t^kc>1qקnJmގ4 mq b=9[@RAbr-_瀧jQKOy n*$1N DnUj#W e$Lj!Xqf2` {kQ -=i5JmֹZ4Sn`;4M# T(Q3BQ1^.,Ĉ0RN> M$&h&P%wܕ&jť[U,fv J8C_XiLGmAFǓ?[O%PGKX㽮$ܫUjXKTE\ 5>JR1X"ܴc٦ k<sR/zs1"MQzG&;;ƈv`ϸM$!u7qNަ~u587$c.ht87hTk^2q|I t.$e㛰n<noߜ?{6J/ّ%U~J2SSgg̾$"A)[Jg-H$/D׍F7З3dV0Kz.1Kjܼu>{cCI]'1rؗ' >2j9I0/$ϵKf^x;M13()^;3$I+Q/r+i7TKF$o>Z> "YUȲyD;>curYFAuiz@y;@ bQ> VWH/`x+мȆzvϊTbi/R$ i&UwZܹm,rǬt4ك}t<I(:^^a'$ ؞gs(v3,z{3 D:&QK)E!D^/mYB4nxImfZunWsA)P4T!w67J.*Z^ dS 5/<Ʈ|-U;`-X+6'c5 $$Q/Jw" ʼB ax [Aj{c* qKo:~@ @F U[jQ @B;EGucAF1ضCINt1 lJ؆W.^2\'uhXIa,zAAvkޚv9URW.^2D3ty}UMq'ĤCAvk kMx(BH+Q/v/~1*y\7zw,W.e6g;wE-MP3ԯ+R5P]NB/1x_AGElj+ nJ_KLpEeS&ݷ\ ]Խ=IJOzCqV`GXTsAVtsà:  K+̾zO1|gb[[0@@hGFIn;K} 9d] 0 *qF\Ñ*TKw¦g:?%C}lELc X2!ē(ɄFqN! $=H{tpUcx$D29upa\q(W7GƸHD;\3_1纜+8%$>D ]( {|-`J{6t[ vs g{qŰUdmlct7앀-ƁK)%&U9R:d03ns3OIR ԩо-!_F?g,FDkuM{F@\Aetu7S忓X:5_>g{}goޠD+DJFxH*HBs+ "\@Q 58AJi)+yfeY`;7Fm2* {vVkyf൥FhUVؗ (hb)U}}Y’h)!,,:{6ʍWnrcw_qՐTc"S6#"(R$b1'J EE qhAr-J!߁Ԑ%,j(ZD*f;<>ˇٿnyp*z_WOn_m2Sf䉛TfZ|7O^Lei(-)N0{(`kBHFi yB4'P),FáuLW/Jc6R(8%)BBH)t %䐊SCEa*PN@ye$dpccϛ ́!D6i(y}P2$>_gq!/޿{y`Rguf: @20Fvl6jǻ;A_w$3Mha6_s%6;#{N0oC>E7M_cK 0z]rLQq<%hg(z=xS"d>!PiʘnT sR@@ڏ@&*I& %en$ƚhvcWllЌ2%LHcH0@ PWAlQB.|s.(?F mDIb  aDH~wHqI2(Y|u51_'4I?'hJm8$Ic9=uBb nWt}@XJI@JbI!t.E Ux)Ґǡ<>z(ǡ_>ko5t=YWDzs?#\<0ݬyg-r9RX3 @yf"R`U i*,}*A6Un0q#쩳| ɕ1OǑ?N闣1XJCe#iޅx#C O 1<81c 998 dӈ J'nH%8d@}RRL2P=ɤ|21%anB2Zmpk%ba  b aIhI2P $ ǀcE뷋D_?Nrj7d fn07pIK&쳒\HSX"eA hy-ر څQ/bž4u>9<Ҩ)qY[RJ6@Lx@*{ayHْM1T*4w+7% -]gY~"R4]FJ0hm_&LلMH&i|Aeˋ-m+7WWF6F'A)0r %2JE$ 2,!9BC#(|*\ct>]aJ4Wi$R%eY2Q 5)%͢!i"q BON_R6¥j1_b4B"XprQqj E)0MpKɘ۹O@Ҟqe'<ʭsk;tb24jz~lzs(seOb2N=Ú?9l0}WtpqB*f$&"xhL쇈dVAI\ tp4$&c!n `5:=nm,uMуf _V˱ hLvv "I%!4|΍ɬ9%|5+F:Z !W/ /! $twh0}N [Z *$.(f ކT!~5 Uՠ KTₔk"`{TRfyR/_<C[{T˒T)9}%+QS! 
var/home/core/zuul-output/logs/kubelet.log
Feb 02 06:46:13 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 02 06:46:13 crc restorecon[4814]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:13 crc restorecon[4814]:
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc 
restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 02 06:46:13 crc restorecon[4814]: paths under /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d not reset as customized by admin; each context is system_u:object_r:container_file_t:s0:<categories> with the categories listed:
    containers/setup/a989f289                    c336,c1016
    containers/setup/915431bd                    c666,c920
    containers/etcd-ensure-env-vars/7796fdab     c294,c884
    containers/etcd-ensure-env-vars/dcdb5f19     c336,c1016
    containers/etcd-ensure-env-vars/a3aaa88c     c666,c920
    containers/etcd-resources-copy/5508e3e6      c294,c884
    containers/etcd-resources-copy/160585de      c336,c1016
    containers/etcd-resources-copy/e99f8da3      c666,c920
    containers/etcdctl/8bc85570                  c294,c884
    containers/etcdctl/a5861c91                  c336,c1016
    containers/etcdctl/84db1135                  c666,c920
    containers/etcd/9e1a6043                     c294,c884
    containers/etcd/c1aba1c2                     c336,c1016
    containers/etcd/d55ccd6d                     c666,c920
    containers/etcd-metrics/971cc9f6             c294,c884
    containers/etcd-metrics/8f2e3dcf             c336,c1016
    containers/etcd-metrics/ceb35e9c             c666,c920
    containers/etcd-readyz/1c192745              c294,c884
    containers/etcd-readyz/5209e501              c336,c1016
    containers/etcd-readyz/f83de4df              c666,c920
    containers/etcd-rev/e7b978ac                 c294,c884
    containers/etcd-rev/c64304a1                 c336,c1016
    containers/etcd-rev/5384386b                 c666,c920
Feb 02 06:46:13 crc restorecon[4814]: paths under /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc not reset as customized by admin:
    etc-hosts                                        c268,c620
    containers/multus-admission-controller/cce3e3ff  c435,c756
    containers/multus-admission-controller/8fb75465  c268,c620
    containers/kube-rbac-proxy/740f573e              c435,c756
    containers/kube-rbac-proxy/32fd1134              c268,c620
Feb 02 06:46:13 crc restorecon[4814]: paths under /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866 not reset as customized by admin (all s0:c19,c24):
    etc-hosts
    containers/serve-healthcheck-canary/0a861bd3
    containers/serve-healthcheck-canary/80363026
    containers/serve-healthcheck-canary/bfa952a8
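Each container above carries its own MCS category pair (for example s0:c294,c884): the container runtime labels each container's files with a unique pair so one container's processes cannot read another's data, and restorecon leaves such contexts in place because they look deliberately set rather than mislabeled. A minimal, illustrative Python sketch for grouping these journal entries by category pair; the regex and the helper name group_by_categories are assumptions that match only the exact message format shown in this log:

    import re
    from collections import defaultdict

    # Matches lines like:
    #   ... restorecon[4814]: /path not reset as customized by admin
    #   to system_u:object_r:container_file_t:s0:c294,c884
    LINE_RE = re.compile(
        r"restorecon\[\d+\]: (?P<path>\S+) not reset as customized "
        r"by admin to (?P<context>\S+)"
    )

    def group_by_categories(log_lines):
        """Map each MCS range (e.g. 's0:c294,c884') to its paths."""
        groups = defaultdict(list)
        for line in log_lines:
            m = LINE_RE.search(line)
            if m:
                # context = user:role:type:sensitivity:categories;
                # keep everything after the type as the MCS range.
                _user, _role, _type, *mcs_range = m.group("context").split(":")
                groups[":".join(mcs_range)].append(m.group("path"))
        return groups

For instance, feeding this section's entries through group_by_categories would collect all of the etcd pod's files under the three ranges s0:c294,c884, s0:c336,c1016 and s0:c666,c920.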
Feb 02 06:46:13 crc restorecon[4814]: paths under /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783 not reset as customized by admin:
    volumes/kubernetes.io~configmap/auth-proxy-config                                                    c129,c158
    volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563                   c129,c158
    volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml  c129,c158
    volumes/kubernetes.io~configmap/auth-proxy-config/..data                                             c129,c158
    volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml                                   c129,c158
    volumes/kubernetes.io~configmap/config                                                               c129,c158
    volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221                               c129,c158
    volumes/kubernetes.io~configmap/config/..data                                                        c129,c158
    etc-hosts                                                                                            c129,c158
    containers/kube-rbac-proxy/793bf43d                                                                  c381,c387
    containers/kube-rbac-proxy/7db1bb6e                                                                  c142,c438
    containers/kube-rbac-proxy/4f6a0368                                                                  c129,c158
    containers/machine-approver-controller/c12c7d86                                                      c381,c387
    containers/machine-approver-controller/36c4a773                                                      c142,c438
    containers/machine-approver-controller/4c1e98ae                                                      c142,c438
    containers/machine-approver-controller/a4c8115c                                                      c129,c158
Feb 02 06:46:13 crc restorecon[4814]: paths under /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792 not reset as customized by admin (all s0:c97,c980):
    etc-hosts
    containers/setup/7db1802e
    containers/kube-apiserver/a008a7ab
    containers/kube-apiserver-cert-syncer/2c836bac
    containers/kube-apiserver-cert-regeneration-controller/0ce62299
    containers/kube-apiserver-insecure-readyz/945d2457
    containers/kube-apiserver-check-endpoints/7d5c1dd8
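The paths in these entries follow the kubelet's on-disk pod layout: /var/lib/kubelet/pods/<pod-UID>/ holds a per-pod etc-hosts copy, per-container bookkeeping under containers/<name>/<file>, and volume data under volumes/kubernetes.io~<plugin>/<volume-name>. A small sketch in the same vein as the one above (the function name classify_kubelet_path is hypothetical), splitting such a path into pod UID, kind, and detail:

    from pathlib import PurePosixPath

    def classify_kubelet_path(path: str):
        """Split a kubelet pod path into (pod_uid, kind, detail), or None."""
        parts = PurePosixPath(path).parts
        if parts[:5] != ("/", "var", "lib", "kubelet", "pods") or len(parts) < 7:
            return None
        uid, kind, rest = parts[5], parts[6], parts[7:]
        if kind == "containers":
            # e.g. containers/kube-apiserver/a008a7ab
            return uid, "container", "/".join(rest)
        if kind == "volumes":
            # e.g. volumes/kubernetes.io~configmap/auth-proxy-config/...
            plugin = rest[0].split("~", 1)[-1] if rest else ""
            return uid, "volume:" + plugin, "/".join(rest[1:])
        return uid, kind, "/".join(rest)  # e.g. etc-hosts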
Feb 02 06:46:13 crc restorecon[4814]: paths under /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011 not reset as customized by admin (all system_u:object_r:container_file_t:s0:c7,c13):
    volumes/kubernetes.io~empty-dir/utilities
    volumes/kubernetes.io~empty-dir/utilities/copy-content
    volumes/kubernetes.io~empty-dir/catalog-content
    volumes/kubernetes.io~empty-dir/catalog-content/catalog
    volumes/kubernetes.io~empty-dir/catalog-content/catalog/<name> and <name>/catalog.json for each of:
        3scale-operator, advanced-cluster-management, amq-broker-rhel8, amq-online, amq-streams,
        amq-streams-console, amq7-interconnect-operator, ansible-automation-platform-operator,
        ansible-cloud-addons-operator, apicast-operator, apicurio-registry-3, authorino-operator,
        aws-load-balancer-operator, bamoe-businessautomation-operator, bamoe-kogito-operator,
        businessautomation-operator, cephcsi-operator, cincinnati-operator,
        cluster-kube-descheduler-operator, cluster-logging, cluster-observability-operator,
        compliance-operator, container-security-operator, costmanagement-metrics-operator,
        cryostat-operator, datagrid, devspaces, devworkspace-operator, dpu-network-operator, eap,
        elasticsearch-operator, external-dns-operator, fence-agents-remediation,
        file-integrity-operator, fuse-apicurito, fuse-console, fuse-online,
        gatekeeper-operator-product, jaeger-product, jws-operator, kernel-module-management,
        kernel-module-management-hub, kiali-ossm, kubevirt-hyperconverged, logic-operator-rhel8,
        loki-operator, lvms-operator, machine-deletion-remediation, mcg-operator, mta-operator,
        mtc-operator, mtr-operator, mtv-operator, multicluster-engine, netobserv-operator,
        node-healthcheck-operator, node-maintenance-operator, node-observability-operator,
        ocs-client-operator, ocs-operator, odf-csi-addons-operator, odf-multicluster-orchestrator,
        odf-operator, odf-prometheus-operator, odr-cluster-operator, odr-hub-operator,
        openshift-custom-metrics-autoscaler-operator, openshift-gitops-operator,
        openshift-pipelines-operator-rh, openshift-secondary-scheduler-operator,
        opentelemetry-product, quay-bridge-operator, quay-operator, recipe, red-hat-camel-k,
        red-hat-hawtio-operator, redhat-oadp-operator, rh-service-binding-operator, rhacs-operator,
        rhbk-operator, rhdh, rhods-operator, rhods-prometheus-operator, rhpam-kogito-operator,
        rhsso-operator, rook-ceph-operator, run-once-duration-override-operator,
        sandboxed-containers-operator, security-profiles-operator, self-node-remediation,
        serverless-operator, service-registry-operator, servicemeshoperator, servicemeshoperator3,
        skupper-operator, submariner, tang-operator, tempo-product, trustee-operator,
        volsync-product, web-terminal
    volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator and bpfman-operator/index.json
    volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator with bundle-v1.15.0.json, channel.json and package.json
    volumes/kubernetes.io~empty-dir/catalog-content/cache
    volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1
    volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db
    volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg
    volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt
    volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt
    volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt
    volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix
    volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix
    volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest
    etc-hosts
Feb 02 06:46:13 crc restorecon[4814]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 
06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:13 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
[... the same "restorecon[4814]: <path> not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16" record repeats at Feb 02 06:46:14 for every entry under pem/directory-hash/ in this pod volume: each CA root certificate .pem (e.g. CFCA_EV_ROOT.pem, ACCVRAIZ1.pem, AC_RAIZ_FNMT-RCM.pem, AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem, ANF_Secure_Server_Root_CA.pem, Actalis_Authentication_Root_CA.pem, AffirmTrust_Commercial.pem, AffirmTrust_Networking.pem, AffirmTrust_Premium.pem, AffirmTrust_Premium_ECC.pem, Amazon_Root_CA_1.pem through Amazon_Root_CA_4.pem, Atos_TrustedRoot_2011.pem, Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem, Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem, Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem, BJCA_Global_Root_CA1.pem, BJCA_Global_Root_CA2.pem, Baltimore_CyberTrust_Root.pem, Buypass_Class_2_Root_CA.pem, Buypass_Class_3_Root_CA.pem, CA_Disig_Root_R2.pem, COMODO_Certification_Authority.pem, COMODO_ECC_Certification_Authority.pem, COMODO_RSA_Certification_Authority.pem, Certainly_Root_R1.pem, Certainly_Root_E1.pem, Certigna.pem, Certigna_Root_CA.pem, Certum_EC-384_CA.pem, Certum_Trusted_Network_CA.pem, Certum_Trusted_Network_CA_2.pem, Certum_Trusted_Root_CA.pem, CommScope_Public_Trust_ECC_Root-01/-02.pem, CommScope_Public_Trust_RSA_Root-01/-02.pem, D-TRUST_BR_Root_CA_1_2020.pem, D-TRUST_EV_Root_CA_1_2020.pem, D-TRUST_Root_Class_3_CA_2_2009.pem, D-TRUST_Root_Class_3_CA_2_EV_2009.pem, DigiCert_Assured_ID_Root_CA/G2/G3.pem, DigiCert_Global_Root_CA/G2/G3.pem, DigiCert_High_Assurance_EV_Root_CA.pem, DigiCert_TLS_ECC_P384_Root_G5.pem, DigiCert_TLS_RSA4096_Root_G5.pem, DigiCert_Trusted_Root_G4.pem, Entrust.net_Certification_Authority__2048_.pem, Entrust_Root_Certification_Authority.pem and -_EC1/-_G2/-_G4 variants, FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem, GDCA_TrustAUTH_R5_ROOT.pem, GLOBALTRUST_2020.pem, GTS_Root_R1 through GTS_Root_R4.pem, GlobalSign.pem and GlobalSign.1/.2/.3.pem, GlobalSign_Root_CA.pem, GlobalSign_Root_E46.pem, GlobalSign_Root_R46.pem, Go_Daddy_Class_2_Certification_Authority.pem, Go_Daddy_Root_Certificate_Authority_-_G2.pem, HARICA_TLS_ECC_Root_CA_2021.pem, HARICA_TLS_RSA_Root_CA_2021.pem, Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem, ISRG_Root_X1.pem, Sectigo_Public_Server_Authentication_Root_E46.pem, AAA_Certificate_Services.pem, openshift-service-serving-signer_1740288168.pem, ingress-operator_1740288202.pem), plus the corresponding OpenSSL hash symlinks (2ae6433e.0, fde84897.0, 75680d2e.0, facfc4fa.0, 8f5a969c.0, and the remaining <hash>.0 entries) ...]
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 02 06:46:14 crc restorecon[4814]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 
06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 02 06:46:14 crc restorecon[4814]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 02 06:46:15 crc kubenswrapper[4842]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.152161 4842 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157056 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157089 4842 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157099 4842 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157109 4842 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157119 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157127 4842 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157136 4842 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157145 4842 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157154 4842 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157162 4842 feature_gate.go:330] unrecognized feature gate: Example Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157196 4842 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157204 4842 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157213 4842 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157229 4842 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157265 4842 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157276 4842 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
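A note on the long restorecon run above: container_file_t is a customizable SELinux type, so restorecon leaves any file already carrying it in place and logs "not reset as customized by admin" rather than relabeling; only a forced run (restorecon -F) would reset those contexts. The MCS category pair at the end of each context (s0:c7,c13 for one pod, s0:c247,c522 for another) is what isolates one pod's files from the next. The sketch below groups the skipped paths by context to show that mapping; the log path is an assumption, and it assumes entries are not wrapped mid-line.

    #!/usr/bin/env python3
    # Sketch: group the restorecon-skipped paths by SELinux context.
    # container_file_t is a customizable type, so restorecon leaves it in
    # place unless run with -F; each pod keeps its own MCS pair (e.g. c7,c13).
    # "kubelet.log" is an assumed path; entries must not be wrapped mid-line.
    import re
    import sys
    from collections import defaultdict

    LINE_RE = re.compile(
        r"restorecon\[\d+\]: (\S+) not reset as customized by admin to (\S+)"
    )

    def group_by_context(log_text):
        groups = defaultdict(list)
        for path, context in LINE_RE.findall(log_text):
            groups[context].append(path)
        return groups

    if __name__ == "__main__":
        log_path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"
        with open(log_path, encoding="utf-8", errors="replace") as f:
            grouped = group_by_context(f.read())
        for context, paths in sorted(grouped.items()):
            print(f"{context}: {len(paths)} paths, e.g. {paths[0]}")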
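The flag-deprecation warnings above all point at the same fix: move each setting into the file named by --config (the FLAG dump further down shows --config="/etc/kubernetes/kubelet.conf"). Per the warnings themselves, --minimum-container-ttl-duration has no config equivalent (use the eviction settings instead) and --pod-infra-container-image is going away entirely once CRI supplies the sandbox image, but the rest map directly onto KubeletConfiguration fields. A minimal sketch, assuming field names from kubelet.config.k8s.io/v1beta1 and values mirroring the FLAG dump in this log; the kubelet parses the config file as YAML, of which JSON is a subset, so json.dump suffices:

    #!/usr/bin/env python3
    # Sketch: config-file equivalents of the deprecated kubelet flags above.
    # Field names follow KubeletConfiguration (kubelet.config.k8s.io/v1beta1);
    # values mirror the FLAG dump later in this log.
    # "kubelet.conf.json" is an assumed output name.
    import json

    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # replaces --container-runtime-endpoint (the dump shows the bare
        # socket path; a unix:// scheme prefix is also commonly written)
        "containerRuntimeEndpoint": "/var/run/crio/crio.sock",
        # replaces --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
        "registerWithTaints": [
            {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
        ],
        # replaces --system-reserved=cpu=200m,ephemeral-storage=350Mi,memory=350Mi
        "systemReserved": {
            "cpu": "200m",
            "ephemeral-storage": "350Mi",
            "memory": "350Mi",
        },
        # replaces --volume-plugin-dir
        "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    }

    with open("kubelet.conf.json", "w") as f:
        json.dump(kubelet_config, f, indent=2)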
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157285 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157295 4842 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157303 4842 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157313 4842 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157323 4842 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157331 4842 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157340 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157348 4842 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157356 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157364 4842 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157372 4842 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157380 4842 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157388 4842 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157395 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157403 4842 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157411 4842 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157418 4842 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157426 4842 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157434 4842 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157441 4842 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157451 4842 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157459 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157467 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157474 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157482 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 06:46:15 crc 
kubenswrapper[4842]: W0202 06:46:15.157489 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157496 4842 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157504 4842 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157512 4842 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157519 4842 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157619 4842 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157934 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157945 4842 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157953 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157971 4842 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.157980 4842 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158087 4842 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158096 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158105 4842 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158120 4842 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158129 4842 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158137 4842 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158152 4842 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158162 4842 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158186 4842 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158194 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158202 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158210 4842 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158224 4842 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158252 4842 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158261 4842 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 06:46:15 crc 
kubenswrapper[4842]: W0202 06:46:15.158269 4842 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158284 4842 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158294 4842 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.158303 4842 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158563 4842 flags.go:64] FLAG: --address="0.0.0.0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158581 4842 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158594 4842 flags.go:64] FLAG: --anonymous-auth="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158615 4842 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158626 4842 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158637 4842 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158648 4842 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158660 4842 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158670 4842 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158680 4842 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158691 4842 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158711 4842 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158721 4842 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158730 4842 flags.go:64] FLAG: --cgroup-root="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158740 4842 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158750 4842 flags.go:64] FLAG: --client-ca-file="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158760 4842 flags.go:64] FLAG: --cloud-config="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158769 4842 flags.go:64] FLAG: --cloud-provider="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158778 4842 flags.go:64] FLAG: --cluster-dns="[]" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158790 4842 flags.go:64] FLAG: --cluster-domain="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158806 4842 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158816 4842 flags.go:64] FLAG: --config-dir="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158825 4842 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158835 4842 flags.go:64] FLAG: --container-log-max-files="5" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158848 4842 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 02 06:46:15 
crc kubenswrapper[4842]: I0202 06:46:15.158858 4842 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158868 4842 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158878 4842 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158894 4842 flags.go:64] FLAG: --contention-profiling="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158905 4842 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158915 4842 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158925 4842 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158934 4842 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158946 4842 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158955 4842 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158965 4842 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158975 4842 flags.go:64] FLAG: --enable-load-reader="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.158993 4842 flags.go:64] FLAG: --enable-server="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159002 4842 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159015 4842 flags.go:64] FLAG: --event-burst="100" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159025 4842 flags.go:64] FLAG: --event-qps="50" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159034 4842 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159044 4842 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159057 4842 flags.go:64] FLAG: --eviction-hard="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159068 4842 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159086 4842 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159096 4842 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159108 4842 flags.go:64] FLAG: --eviction-soft="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159117 4842 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159127 4842 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159136 4842 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159145 4842 flags.go:64] FLAG: --experimental-mounter-path="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159155 4842 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159165 4842 flags.go:64] FLAG: --fail-swap-on="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159344 4842 flags.go:64] FLAG: --feature-gates="" Feb 
02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159405 4842 flags.go:64] FLAG: --file-check-frequency="20s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159426 4842 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159442 4842 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159456 4842 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159530 4842 flags.go:64] FLAG: --healthz-port="10248" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159544 4842 flags.go:64] FLAG: --help="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159554 4842 flags.go:64] FLAG: --hostname-override="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159564 4842 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159971 4842 flags.go:64] FLAG: --http-check-frequency="20s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.159996 4842 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160007 4842 flags.go:64] FLAG: --image-credential-provider-config="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160016 4842 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160025 4842 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160036 4842 flags.go:64] FLAG: --image-service-endpoint="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160049 4842 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160061 4842 flags.go:64] FLAG: --kube-api-burst="100" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160073 4842 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160086 4842 flags.go:64] FLAG: --kube-api-qps="50" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160097 4842 flags.go:64] FLAG: --kube-reserved="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160110 4842 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160121 4842 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160130 4842 flags.go:64] FLAG: --kubelet-cgroups="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160140 4842 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160149 4842 flags.go:64] FLAG: --lock-file="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160158 4842 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160168 4842 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160178 4842 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160195 4842 flags.go:64] FLAG: --log-json-split-stream="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160204 4842 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160213 4842 flags.go:64] FLAG: --log-text-split-stream="false" Feb 02 06:46:15 crc 
kubenswrapper[4842]: I0202 06:46:15.160254 4842 flags.go:64] FLAG: --logging-format="text" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160264 4842 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160275 4842 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160284 4842 flags.go:64] FLAG: --manifest-url="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160293 4842 flags.go:64] FLAG: --manifest-url-header="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160309 4842 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160319 4842 flags.go:64] FLAG: --max-open-files="1000000" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160331 4842 flags.go:64] FLAG: --max-pods="110" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160340 4842 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160350 4842 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160361 4842 flags.go:64] FLAG: --memory-manager-policy="None" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160370 4842 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160380 4842 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160390 4842 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160400 4842 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160436 4842 flags.go:64] FLAG: --node-status-max-images="50" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160445 4842 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160455 4842 flags.go:64] FLAG: --oom-score-adj="-999" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160464 4842 flags.go:64] FLAG: --pod-cidr="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160473 4842 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160488 4842 flags.go:64] FLAG: --pod-manifest-path="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160497 4842 flags.go:64] FLAG: --pod-max-pids="-1" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160506 4842 flags.go:64] FLAG: --pods-per-core="0" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160515 4842 flags.go:64] FLAG: --port="10250" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160524 4842 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160533 4842 flags.go:64] FLAG: --provider-id="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160542 4842 flags.go:64] FLAG: --qos-reserved="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160551 4842 flags.go:64] FLAG: --read-only-port="10255" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160560 4842 flags.go:64] FLAG: --register-node="true" Feb 02 06:46:15 crc 
kubenswrapper[4842]: I0202 06:46:15.160569 4842 flags.go:64] FLAG: --register-schedulable="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160580 4842 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160598 4842 flags.go:64] FLAG: --registry-burst="10" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160608 4842 flags.go:64] FLAG: --registry-qps="5" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160617 4842 flags.go:64] FLAG: --reserved-cpus="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160626 4842 flags.go:64] FLAG: --reserved-memory="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160638 4842 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160647 4842 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160657 4842 flags.go:64] FLAG: --rotate-certificates="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160666 4842 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160675 4842 flags.go:64] FLAG: --runonce="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160684 4842 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160693 4842 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160703 4842 flags.go:64] FLAG: --seccomp-default="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160711 4842 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160720 4842 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160730 4842 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160739 4842 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160748 4842 flags.go:64] FLAG: --storage-driver-password="root" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160757 4842 flags.go:64] FLAG: --storage-driver-secure="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160766 4842 flags.go:64] FLAG: --storage-driver-table="stats" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160776 4842 flags.go:64] FLAG: --storage-driver-user="root" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160785 4842 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160794 4842 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160803 4842 flags.go:64] FLAG: --system-cgroups="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160812 4842 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160829 4842 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160838 4842 flags.go:64] FLAG: --tls-cert-file="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160847 4842 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160864 4842 flags.go:64] FLAG: --tls-min-version="" Feb 02 06:46:15 
crc kubenswrapper[4842]: I0202 06:46:15.160874 4842 flags.go:64] FLAG: --tls-private-key-file="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160883 4842 flags.go:64] FLAG: --topology-manager-policy="none" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160892 4842 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160901 4842 flags.go:64] FLAG: --topology-manager-scope="container" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160910 4842 flags.go:64] FLAG: --v="2" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160930 4842 flags.go:64] FLAG: --version="false" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160943 4842 flags.go:64] FLAG: --vmodule="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160954 4842 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.160965 4842 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161286 4842 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161300 4842 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161309 4842 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161317 4842 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161328 4842 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161336 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161344 4842 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161351 4842 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161359 4842 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161367 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161374 4842 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161382 4842 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161389 4842 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161397 4842 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161407 4842 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
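[editor's note] The FLAG dump that ends above records every kubelet command-line option and its effective value in the fixed form `flags.go:64] FLAG: --name="value"`. A minimal sketch of pulling such a dump back into a lookup table, assuming journal lines like the ones above arrive on stdin; the program name and regexp here are illustrative, not part of any kubelet tooling:

// flagdump.go - collect `FLAG: --name="value"` pairs from a kubelet
// journal stream into a map. Illustrative only; the line layout is
// assumed from the log above, not taken from any kubelet API.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches one logged flag; values in this dump never contain quotes,
// so a non-greedy match up to the closing quote is enough.
var flagRe = regexp.MustCompile(`FLAG: --([A-Za-z0-9-]+)="(.*?)"`)

func main() {
	flags := map[string]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		// A flattened line may hold many entries, so match all occurrences.
		for _, m := range flagRe.FindAllStringSubmatch(sc.Text(), -1) {
			flags[m[1]] = m[2]
		}
	}
	// e.g. flags["max-pods"] == "110", flags["node-ip"] == "192.168.126.11"
	fmt.Printf("parsed %d flags, max-pods=%s\n", len(flags), flags["max-pods"])
}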
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161419 4842 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161429 4842 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161440 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161450 4842 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161460 4842 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161474 4842 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161487 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161498 4842 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161507 4842 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161515 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161523 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161530 4842 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161539 4842 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161551 4842 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161559 4842 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161567 4842 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161578 4842 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161588 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161596 4842 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161605 4842 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161614 4842 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161622 4842 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161630 4842 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161637 4842 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161645 4842 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161652 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161660 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161668 4842 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161676 4842 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161683 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161691 4842 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161699 4842 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161706 4842 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161735 4842 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161743 4842 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161750 4842 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161758 4842 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161766 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161773 4842 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161781 4842 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161789 4842 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161799 4842 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161809 4842 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161817 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161825 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161836 4842 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161847 4842 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161854 4842 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161862 4842 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161870 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161878 4842 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161886 4842 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161893 4842 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161901 4842 feature_gate.go:330] unrecognized feature gate: Example Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161908 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.161915 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.161941 4842 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.174535 4842 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.174597 4842 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174742 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174764 4842 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174775 4842 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174789 4842 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174801 4842 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
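[editor's note] The warning runs above and below are the same feature-gate map being applied once per consumer: names the binary does not know produce `unrecognized feature gate` warnings and are skipped, known gates are set, deprecated and GA gates get the `It will be removed in a future release` notice, and each pass ends with the effective `feature gates: {map[...]}` summary. A rough behavioral sketch under that reading, using a plain map rather than the real k8s.io/component-base/featuregate package the kubelet actually uses:

// gates.go - apply a requested feature-gate map against a set of known
// gates, warning on unrecognized names. A sketch of the behavior logged
// above, not the real implementation.
package main

import "log"

type gateState int

const (
	alpha gateState = iota
	deprecated
	ga
)

func apply(known map[string]gateState, requested map[string]bool) map[string]bool {
	effective := map[string]bool{}
	for name, val := range requested { // map order is random, like the passes above
		st, ok := known[name]
		if !ok {
			log.Printf("unrecognized feature gate: %s", name) // e.g. SignatureStores
			continue
		}
		switch st {
		case deprecated:
			log.Printf("Setting deprecated feature gate %s=%v. It will be removed in a future release.", name, val)
		case ga:
			log.Printf("Setting GA feature gate %s=%v. It will be removed in a future release.", name, val)
		}
		effective[name] = val
	}
	return effective
}

func main() {
	known := map[string]gateState{"KMSv1": deprecated, "CloudDualStackNodeIPs": ga, "NodeSwap": alpha}
	requested := map[string]bool{"KMSv1": true, "CloudDualStackNodeIPs": true, "SignatureStores": true, "NodeSwap": false}
	log.Printf("feature gates: %v", apply(known, requested))
}

That the three passes list the same gates in different orders is consistent with unordered map iteration; the effective map printed after each pass is identical.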
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174812 4842 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174821 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174830 4842 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174839 4842 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174848 4842 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174857 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174865 4842 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174873 4842 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174881 4842 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174888 4842 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174897 4842 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174905 4842 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174912 4842 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174920 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174928 4842 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174937 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174946 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174954 4842 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174962 4842 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174971 4842 feature_gate.go:330] unrecognized feature gate: Example Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174979 4842 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174988 4842 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.174996 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175005 4842 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175015 4842 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175025 4842 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175035 4842 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175044 4842 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175052 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175063 4842 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175071 4842 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175082 4842 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175091 4842 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175099 4842 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175107 4842 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175115 4842 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175123 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175131 4842 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175138 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175146 4842 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175154 4842 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175162 4842 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175169 4842 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175177 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175184 4842 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175192 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175200 4842 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175209 4842 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175225 4842 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175264 4842 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 06:46:15 crc 
kubenswrapper[4842]: W0202 06:46:15.175275 4842 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175285 4842 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175294 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175301 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175309 4842 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175316 4842 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175325 4842 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175333 4842 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175341 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175349 4842 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175356 4842 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175364 4842 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175371 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175379 4842 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175387 4842 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.175397 4842 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.175411 4842 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176642 4842 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176665 4842 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176675 4842 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176683 4842 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176692 4842 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176700 4842 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176709 4842 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176717 4842 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176726 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176738 4842 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176749 4842 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176758 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176767 4842 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176776 4842 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176785 4842 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176794 4842 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176803 4842 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176811 4842 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176819 4842 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176827 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176835 4842 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176842 4842 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176850 4842 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176857 4842 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176865 4842 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176873 4842 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176883 4842 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176893 4842 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176902 4842 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176910 4842 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176918 4842 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176926 4842 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176934 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176942 4842 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176952 4842 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176962 4842 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176970 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176978 4842 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176985 4842 feature_gate.go:330] unrecognized feature gate: Example Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.176994 4842 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177001 4842 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177009 4842 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177016 4842 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177024 4842 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177031 4842 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177039 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177046 4842 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177054 4842 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177062 4842 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177070 4842 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177081 4842 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177091 4842 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177100 4842 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177110 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177118 4842 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177127 4842 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177135 4842 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177143 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177152 4842 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177160 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177168 4842 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177176 4842 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177184 4842 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177191 4842 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177199 4842 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177206 4842 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177222 4842 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177255 4842 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177266 4842 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177275 4842 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.177284 4842 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.177297 4842 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.178540 4842 server.go:940] "Client rotation is on, will bootstrap in background" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.184581 4842 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.185433 4842 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.187462 4842 server.go:997] "Starting client certificate rotation" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.187511 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.188576 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-12 16:45:10.284367695 +0000 UTC Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.188727 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.217882 4842 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.221991 4842 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.223289 4842 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.247523 4842 log.go:25] "Validated CRI v1 runtime API" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.287964 4842 log.go:25] "Validated CRI v1 image API" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.290603 4842 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.298172 4842 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-02-06-36-55-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.298280 4842 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.330019 4842 manager.go:217] Machine: {Timestamp:2026-02-02 06:46:15.326431288 +0000 UTC m=+0.703699280 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a2d9b7d5-4deb-436c-8c47-643b2c87256c BootID:46282451-0a80-4a55-be60-279b5a40f455 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e3:ab:6e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e3:ab:6e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:29:42:54 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:60:51:e6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c4:6e:4b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:59:b4:49 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:3a:82:4c Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:da:29:a6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:fa:dc:c0:b5:f3:ef Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:36:d5:88:bc:b8:06 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 
Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.330533 4842 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
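[editor's note] The `Machine:` entry above is cAdvisor's one-shot hardware inventory: 12 cores across 12 sockets, 33654128640 bytes (~31.3 GiB) of memory, the vda disk, the br-ex/ens3/ens7 NICs, and per-core cache topology. A minimal sketch of where the two headline numbers come from, reading /proc/meminfo for the memory capacity and taking the scheduler-visible CPU count; the real cAdvisor parses /proc/cpuinfo and sysfs in far more detail:

// machineinfo.go - recover NumCores and MemoryCapacity roughly as the
// Machine line above reports them. Sketch only; cAdvisor's inventory
// comes from /proc/cpuinfo, /sys/devices/system/ and friends.
package main

import (
	"bufio"
	"fmt"
	"os"
	"runtime"
	"strconv"
	"strings"
)

func memTotalBytes() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "MemTotal:" {
			kb, err := strconv.ParseUint(fields[1], 10, 64)
			return kb * 1024, err // meminfo reports kB
		}
	}
	return 0, fmt.Errorf("MemTotal not found")
}

func main() {
	mem, err := memTotalBytes()
	if err != nil {
		panic(err)
	}
	// On the node above this would print NumCores:12 MemoryCapacity:33654128640.
	fmt.Printf("NumCores:%d MemoryCapacity:%d\n", runtime.NumCPU(), mem)
}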
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.330727 4842 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.331180 4842 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.331603 4842 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.331672 4842 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.332036 4842 topology_manager.go:138] "Creating topology manager with none policy" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.332053 4842 container_manager_linux.go:303] "Creating device plugin manager" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.332673 4842 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.332725 4842 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.333539 4842 state_mem.go:36] "Initialized new in-memory state store" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.333766 4842 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.338668 4842 kubelet.go:418] "Attempting to sync node with API server" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.338702 4842 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
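[editor's note] The Container Manager node config above pins the inputs that decide node allocatable: SystemReserved of 200m CPU / 350Mi memory, KubeReserved null, and a hard eviction threshold of 100Mi for memory.available. Kubernetes computes allocatable as capacity minus reserved minus the hard eviction threshold, so with the 33654128640-byte MemoryCapacity from the Machine entry the memory arithmetic works out as in this sketch:

// allocatable.go - Node Allocatable arithmetic for memory, using the
// values logged above:
//   Allocatable = Capacity - KubeReserved - SystemReserved - HardEviction
package main

import "fmt"

const Mi = 1024 * 1024

func main() {
	capacity := uint64(33654128640)  // MemoryCapacity from the Machine entry
	kubeReserved := uint64(0)        // "KubeReserved":null
	systemReserved := uint64(350 * Mi)
	evictionHard := uint64(100 * Mi) // memory.available quantity "100Mi"

	allocatable := capacity - kubeReserved - systemReserved - evictionHard
	fmt.Printf("memory allocatable: %d bytes (%.2f Gi)\n",
		allocatable, float64(allocatable)/float64(1024*Mi))
}

This prints 33182269440 bytes (~30.90 Gi), which, up to rounding, is what the node should later report as status.allocatable for memory.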
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.338742 4842 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.338763 4842 kubelet.go:324] "Adding apiserver pod source" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.338783 4842 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.343603 4842 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.344748 4842 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.345699 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.345842 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.345898 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.345954 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.347708 4842 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349724 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349775 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349806 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349828 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349868 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349889 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349904 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349926 4842 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349943 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.349957 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.350001 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.350015 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.350958 4842 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.352096 4842 server.go:1280] "Started kubelet" Feb 02 06:46:15 crc systemd[1]: Started Kubernetes Kubelet. Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.353938 4842 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.353547 4842 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.354114 4842 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357112 4842 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357642 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357708 4842 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357725 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:33:25.908262688 +0000 UTC Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357898 4842 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.357948 4842 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.358159 4842 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.358182 4842 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.363293 4842 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.363340 4842 factory.go:55] Registering systemd factory Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.363361 4842 factory.go:221] Registration of the systemd container factory successfully Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.364893 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.365290 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.365576 4842 factory.go:153] Registering CRI-O factory Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.365413 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="200ms" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.365616 4842 factory.go:221] Registration of the crio container factory successfully Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.365788 4842 factory.go:103] Registering Raw factory Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.365823 4842 manager.go:1196] Started watching for new ooms in manager Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.366417 4842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18905b0f6c071ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 06:46:15.351648245 +0000 UTC m=+0.728916197,LastTimestamp:2026-02-02 06:46:15.351648245 +0000 UTC m=+0.728916197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.374315 4842 manager.go:319] Starting recovery of all containers Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.374820 4842 server.go:460] "Adding debug handlers to kubelet server" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384502 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384603 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384630 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384652 4842 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384672 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384695 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384759 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384793 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384821 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384841 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384860 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384880 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384904 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384937 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384969 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.384997 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385024 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385046 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385064 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385088 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385155 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385184 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385229 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385293 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385322 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385351 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385387 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385417 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385449 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385481 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385510 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385537 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385565 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385592 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385618 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385646 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385673 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385701 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385726 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385753 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385779 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385808 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385835 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385863 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385892 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385919 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385946 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.385973 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386001 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386032 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386058 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386084 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386118 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386146 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386176 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386208 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386277 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386310 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386335 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386361 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386390 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386417 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386446 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386473 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386502 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386529 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386555 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386584 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386613 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386643 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386672 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386699 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386726 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386754 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386778 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386803 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386830 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386857 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386885 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386912 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386939 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386962 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.386986 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387012 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387037 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387064 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387153 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387180 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387206 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387301 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387333 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387361 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387390 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387418 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387446 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387486 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387521 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387548 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387575 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387599 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387632 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387658 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387690 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387715 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387819 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387857 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387887 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387916 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387944 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.387973 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388003 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388030 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388060 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388090 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388116 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388153 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388180 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388206 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388278 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388309 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388333 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388355 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388378 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388405 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388431 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388457 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388482 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388506 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388533 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388572 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388596 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388635 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388663 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388686 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388714 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388738 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388761 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388795 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388826 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388853 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388877 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388901 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388928 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388951 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.388976 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389000 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389026 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389051 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389092 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389120 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389149 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389174 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389203 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389269 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389298 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389323 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389347 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389371 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389413 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389439 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389487 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389532 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389565 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389590 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389617 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389655 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389683 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389709 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389733 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389762 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389786 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389810 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.389848 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393400 4842 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393461 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393491 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393519 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393545 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393570 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393604 4842 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393629 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393653 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393677 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393702 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393726 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393751 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393778 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393835 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393867 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393894 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393970 4842 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.393996 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394034 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394059 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394085 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394118 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394158 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394185 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394225 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394319 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394345 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394373 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394397 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394422 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394460 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394488 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394518 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394542 4842 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394606 4842 reconstruct.go:97] "Volume reconstruction finished" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.394624 4842 reconciler.go:26] "Reconciler: start to sync state" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.412889 4842 manager.go:324] Recovery completed Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.428891 4842 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.432126 4842 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.432184 4842 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.432226 4842 kubelet.go:2335] "Starting kubelet main sync loop" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.432372 4842 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 02 06:46:15 crc kubenswrapper[4842]: W0202 06:46:15.434893 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.434994 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.436017 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.438399 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.438443 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.438457 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.439880 4842 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.439925 4842 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.439962 4842 state_mem.go:36] "Initialized new in-memory state store" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.458933 4842 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.468528 4842 policy_none.go:49] "None policy: Start" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.469923 4842 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.469988 4842 state_mem.go:35] "Initializing new in-memory state store" Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.532870 4842 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.543853 4842 manager.go:334] "Starting Device Plugin manager" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.543922 4842 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.543942 4842 server.go:79] "Starting device plugin registration server" Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.544536 4842 eviction_manager.go:189] "Eviction manager: 
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.544536 4842 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.544565 4842 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.544989 4842 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.545107 4842 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.545127 4842 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.562402 4842 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.567397 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="400ms"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.645787 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.647919 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.648020 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.648041 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.648112 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.649079 4842 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.733742 4842 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.733895 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.735692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.735763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.735787 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.736025 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.736704 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.737043 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.737618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.737682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.737707 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.737930 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.738194 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.738324 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.739921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740000 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740019 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740087 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.739937 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740315 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740512 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740772 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.740833 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.741896 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.741996 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742080 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742312 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742366 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742383 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742777 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.742876 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.743315 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744590 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744845 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
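The SyncLoop ADD with source="file" above is the static-pod path: these five control-plane pods come from manifests on disk rather than from the API server, which is why the kubelet can create their sandboxes while every API call in this log is still refused. A polling sketch of a file pod source (the manifest directory and the 20s period are assumptions; the kubelet's real file source also supports inotify):

package main

import (
	"crypto/sha256"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// watchManifests polls a static-pod manifest directory and emits an ADD
// whenever a manifest appears or changes, mimicking 'SyncLoop ADD' source="file".
func watchManifests(dir string, period time.Duration, adds chan<- string) {
	seen := map[string][32]byte{}
	for {
		entries, err := os.ReadDir(dir)
		if err == nil {
			for _, e := range entries {
				if e.IsDir() {
					continue
				}
				data, err := os.ReadFile(filepath.Join(dir, e.Name()))
				if err != nil {
					continue
				}
				sum := sha256.Sum256(data)
				if old, ok := seen[e.Name()]; !ok || old != sum {
					seen[e.Name()] = sum
					adds <- e.Name()
				}
			}
		}
		time.Sleep(period)
	}
}

func main() {
	adds := make(chan string)
	// /etc/kubernetes/manifests is the conventional static-pod dir (assumption).
	go watchManifests("/etc/kubernetes/manifests", 20*time.Second, adds)
	for name := range adds {
		fmt.Printf("SyncLoop ADD source=%q manifest=%s\n", "file", name)
	}
}

The util.go:30 "No sandbox for pod can be found" lines that follow are the expected first-boot case: each of these five pods gets a fresh CRI-O sandbox.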
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744886 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.744983 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.745025 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.745049 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.746284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.746344 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.746358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.800888 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801071 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801144 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801276 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801374 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801460 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801551 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801638 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801688 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801779 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801869 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.801962 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.802057 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.802146 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.802257 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.850024 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.852086 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.852138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.852158 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.852200 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.852884 4842 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.903702 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.903812 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.903978 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904032 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904021 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904209 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904285 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904280 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904554 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904627 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904725 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904748 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904834 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904913 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904927 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.904965 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905044 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905103 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905139 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905184 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905288 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905385 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905391 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905443 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905502 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905594 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905632 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905687 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905736 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: I0202 06:46:15.905858 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:15 crc kubenswrapper[4842]: E0202 06:46:15.968381 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="800ms"
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.079907 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.111692 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.136352 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-5e8a1f67e76476fa64fe449d93c5909260e5813495076fa4636a20befed96cc0 WatchSource:0}: Error finding container 5e8a1f67e76476fa64fe449d93c5909260e5813495076fa4636a20befed96cc0: Status 404 returned error can't find the container with id 5e8a1f67e76476fa64fe449d93c5909260e5813495076fa4636a20befed96cc0
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.139948 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.163406 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.166917 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-9d5cdb5a57df8569b9c795bd8148799ef4da980f44b7107759bc18c540551c35 WatchSource:0}: Error finding container 9d5cdb5a57df8569b9c795bd8148799ef4da980f44b7107759bc18c540551c35: Status 404 returned error can't find the container with id 9d5cdb5a57df8569b9c795bd8148799ef4da980f44b7107759bc18c540551c35
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.179044 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.186324 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-911e169d4d0263ff322625581adbbd5bb1b645fe9e3dab91baa8403eaddfe396 WatchSource:0}: Error finding container 911e169d4d0263ff322625581adbbd5bb1b645fe9e3dab91baa8403eaddfe396: Status 404 returned error can't find the container with id 911e169d4d0263ff322625581adbbd5bb1b645fe9e3dab91baa8403eaddfe396
Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.206698 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-39b17e24da7a6df95f8f6cae05a233775a4d684345b7277358b4ba14b5cc25e5 WatchSource:0}: Error finding container 39b17e24da7a6df95f8f6cae05a233775a4d684345b7277358b4ba14b5cc25e5: Status 404 returned error can't find the container with id 39b17e24da7a6df95f8f6cae05a233775a4d684345b7277358b4ba14b5cc25e5
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.253946 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.256311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.256362 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.256392 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.256436 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.257209 4842 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc"
Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.261391 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.261592 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.356513 4842 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.358808 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:14:29.717778861 +0000 UTC
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.437989 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5e8a1f67e76476fa64fe449d93c5909260e5813495076fa4636a20befed96cc0"}
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.439444 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"39b17e24da7a6df95f8f6cae05a233775a4d684345b7277358b4ba14b5cc25e5"}
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.440742 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"911e169d4d0263ff322625581adbbd5bb1b645fe9e3dab91baa8403eaddfe396"}
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.443996 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9d5cdb5a57df8569b9c795bd8148799ef4da980f44b7107759bc18c540551c35"}
Feb 02 06:46:16 crc kubenswrapper[4842]: I0202 06:46:16.446413 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0443add00b8f4fb80a07e481f140e82798e6760a04afde71ce4c66bedae993fb"}
Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.453246 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.453338 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.471483 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.471594 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Feb 02 06:46:16 crc kubenswrapper[4842]: W0202 06:46:16.558121 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.558333 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
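The "SyncLoop (PLEG): event for pod" entries above come from the Pod Lifecycle Event Generator: it relists containers in the runtime, diffs the result against the previous relist, and feeds ContainerStarted/ContainerDied events into the sync loop - the Data field is the new pod sandbox ID each static pod came up with. A toy relist-and-diff (hypothetical snapshot type, not the kubelet's PLEG):

package main

import "fmt"

type containerState map[string]string // container ID -> "running" or "exited"

// relistDiff compares two runtime snapshots and emits PLEG-style events,
// the same diffing idea behind "SyncLoop (PLEG): event for pod".
func relistDiff(prev, cur containerState) []string {
	var events []string
	for id, state := range cur {
		switch p := prev[id]; {
		case p == "" && state == "running":
			events = append(events, "ContainerStarted "+id)
		case p == "running" && state == "exited":
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	before := containerState{}
	after := containerState{"5e8a1f67e764": "running"} // sandbox ID from the log, truncated
	for _, e := range relistDiff(before, after) {
		fmt.Println("SyncLoop (PLEG): event:", e)
	}
}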
Feb 02 06:46:16 crc kubenswrapper[4842]: E0202 06:46:16.769038 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="1.6s"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.057443 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.060302 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.060412 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.060428 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.060517 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 06:46:17 crc kubenswrapper[4842]: E0202 06:46:17.061679 4842 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.247131 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 06:46:17 crc kubenswrapper[4842]: E0202 06:46:17.249681 4842 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.355248 4842 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.359334 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:09:11.099302836 +0000 UTC
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.453815 4842 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d" exitCode=0
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.454043 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.453962 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d"}
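Worth noticing: the kubelet-serving certificate's expiration stays fixed at 2026-02-24 05:53:03 while the logged rotation deadline moves (2026-01-14 above, 2025-12-31 here, 2026-01-12 on the next attempt). The certificate manager appears to draw a fresh randomized deadline inside the tail of the certificate's lifetime each time it evaluates rotation, so a fleet of kubelets does not all rotate at once. A toy version of that jitter (the 70-90% window and the one-year lifetime are assumptions for illustration; the exact window is a client-go implementation detail):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point late in the certificate's lifetime,
// so repeated evaluations - and different nodes - land on different deadlines.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64() // assumed 70-90% window
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.AddDate(0, -12, 0)                        // assumed 1-year cert
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}

The ContainerDied events with exitCode=0 that follow are the static pods' init containers completing normally before their main containers start.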
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.458182 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.458320 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.458347 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.465123 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd"}
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.465259 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b"}
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.465296 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf"}
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.467081 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45" exitCode=0
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.467204 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.467381 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45"}
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.468151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.468179 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.468191 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.471212 4842 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="618f6f6d52e9588bd7ddbd245c55dfef433902618db7d9aacf19b742debaba1d" exitCode=0
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.471333 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"618f6f6d52e9588bd7ddbd245c55dfef433902618db7d9aacf19b742debaba1d"}
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.471441 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.471807 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.472837 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.472864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.472877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.473741 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.473797 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.473816 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.484830 4842 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205" exitCode=0
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.484898 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205"}
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.484981 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.486850 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.486912 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:17 crc kubenswrapper[4842]: I0202 06:46:17.486934 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:18 crc kubenswrapper[4842]: W0202 06:46:18.147521 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.147695 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.355353 4842 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.359460 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:27:52.169052083 +0000 UTC
Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.369693 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="3.2s"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.493717 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518"}
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.493777 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5"}
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.493793 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee"}
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.493805 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe"}
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.499859 4842 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ef728f328ecc7ea05eff1fe86deb439e0a78e677a87a42e0382395ad1b32e254" exitCode=0
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.500160 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.500265 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ef728f328ecc7ea05eff1fe86deb439e0a78e677a87a42e0382395ad1b32e254"}
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.501616 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.501650 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.501662 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.504948 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0"}
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.505086 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.511015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
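The node-lease controller's retry interval has doubled on every failure in this log: 400ms at 06:46:15.567, 800ms at 06:46:15.968, 1.6s at 06:46:16.769, and 3.2s just above - plain exponential backoff. A minimal sketch of that doubling (the cap value is an assumption; the log never shows where it tops out):

package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the retry interval after each failed lease update,
// matching the 400ms -> 800ms -> 1.6s -> 3.2s progression in this log.
func nextInterval(cur, limit time.Duration) time.Duration {
	next := cur * 2
	if next > limit {
		return limit
	}
	return next
}

func main() {
	interval := 400 * time.Millisecond
	for i := 0; i < 5; i++ {
		fmt.Printf("Failed to ensure lease exists, will retry, interval=%s\n", interval)
		interval = nextInterval(interval, 7*time.Second) // cap is assumed
	}
}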
for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.511057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.511067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.537626 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.537691 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.537704 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.538034 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.539192 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.539260 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.539274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.543435 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588"} Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.543577 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.546920 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.546958 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.546971 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:18 crc kubenswrapper[4842]: W0202 06:46:18.576267 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.576388 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.576388 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.662776 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.663984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.664038 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.664057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:18 crc kubenswrapper[4842]: I0202 06:46:18.664098 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.664697 4842 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.169:6443: connect: connection refused" node="crc"
Feb 02 06:46:18 crc kubenswrapper[4842]: W0202 06:46:18.906846 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.169:6443: connect: connection refused
Feb 02 06:46:18 crc kubenswrapper[4842]: E0202 06:46:18.906988 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.169:6443: connect: connection refused" logger="UnhandledError"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.360292 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:07:23.87655444 +0000 UTC
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.550647 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169"}
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.550916 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.552887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.552927 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.552943 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554465 4842 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4ff7d2c230b7ef8d5dae5a246f049192db6652d55aeae25115de2041dbb3be74" exitCode=0
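The records above follow klog's single-line header layout: a severity letter (I/W/E/F), the date as MMDD, a microsecond wall-clock time, the process ID, and the emitting source file and line, followed by the message. For post-processing a log like this one, a minimal Go sketch of a header parser might look as follows; the regex and field handling are illustrative, not kubelet code.

// Sketch: parse the klog header used by the kubelet lines above.
// The regex and the printed field names are this note's own illustration.
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches e.g. `I0202 06:46:18.493717 4842 kubelet.go:2453] ...`
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w._/-]+):(\d+)\] (.*)$`)

func main() {
	line := `I0202 06:46:18.493717 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc"`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date(MMDD)=%s time=%s pid=%s source=%s:%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
	fmt.Printf("message=%s\n", m[7])
}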
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554554 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554584 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554704 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554764 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554634 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4ff7d2c230b7ef8d5dae5a246f049192db6652d55aeae25115de2041dbb3be74"}
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.554868 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556616 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556653 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556670 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556697 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556845 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.556869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557776 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557810 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557879 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557902 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:19 crc kubenswrapper[4842]: I0202 06:46:19.557911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.361060 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:24:11.316834399 +0000 UTC
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.423711 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567211 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ad7e16aa26380210f6e5a17aba39b2e15ff5b543a25247c7222f05c398888fbe"}
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567321 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"449d5b62df4e1db49847e3d77dc4ca3c70b573290bb19f9c56f6057a404b92bc"}
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f9c0153fa6a4621977051bc7520582c8f6ddba3cefc69852a44383b1d1dd0b87"}
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567521 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567545 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.567617 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569867 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569851 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569947 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.569965 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:20 crc kubenswrapper[4842]: I0202 06:46:20.647502 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.289071 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.361502 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:19:00.118779106 +0000 UTC
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.577887 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4f62f42ebc4afae27aa42966f04a4638ae38d0ef84da92504a0a303b56ffd69"}
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.577941 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af6f71282f78a0334feb7e8e7cd6fd7b9c4adf33d862bda0a4a0006cdf1702e3"}
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.577982 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.578052 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.578182 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579523 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579546 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579930 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579973 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.579988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.580021 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.580066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.580088 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.590767 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.865300 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.867070 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.867140 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.867166 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.867201 4842 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 02 06:46:21 crc kubenswrapper[4842]: I0202 06:46:21.898267 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.362062 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:52:21.883507355 +0000 UTC
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.580945 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.580983 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.580945 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584012 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584068 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584088 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584317 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.584337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.585831 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.585869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:22 crc kubenswrapper[4842]: I0202 06:46:22.585889 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.362539 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:59:13.713176361 +0000 UTC
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.362655 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.584610 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.586052 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.586107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.586128 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.724625 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.724927 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.727078 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.727147 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.727167 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:23 crc kubenswrapper[4842]: I0202 06:46:23.990944 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.289958 4842 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.290079 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.362814 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 22:18:09.76516727 +0000 UTC
Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.587982 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.589189 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.589258 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:24 crc kubenswrapper[4842]: I0202 06:46:24.589273 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.363409 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:14:21.390360314 +0000 UTC
Feb 02 06:46:25 crc kubenswrapper[4842]: E0202 06:46:25.563517 4842 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.604155 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.604587 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.606931 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.607046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.607076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:25 crc kubenswrapper[4842]: I0202 06:46:25.617690 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.165581 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.166013 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.168149 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.168255 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.168276 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.364457 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:10:18.684816104 +0000 UTC
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.594753 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.596567 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.596624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.596639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:26 crc kubenswrapper[4842]: I0202 06:46:26.603342 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:46:27 crc kubenswrapper[4842]: I0202 06:46:27.365167 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:32:18.003117173 +0000 UTC
Feb 02 06:46:27 crc kubenswrapper[4842]: I0202 06:46:27.598764 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:27 crc kubenswrapper[4842]: I0202 06:46:27.600538 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:27 crc kubenswrapper[4842]: I0202 06:46:27.600603 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:27 crc kubenswrapper[4842]: I0202 06:46:27.600621 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:28 crc kubenswrapper[4842]: I0202 06:46:28.332111 4842 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 02 06:46:28 crc kubenswrapper[4842]: I0202 06:46:28.332183 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 02 06:46:28 crc kubenswrapper[4842]: I0202 06:46:28.365406 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:13:36.127478912 +0000 UTC
Feb 02 06:46:28 crc kubenswrapper[4842]: W0202 06:46:28.981210 4842 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 02 06:46:28 crc kubenswrapper[4842]: I0202 06:46:28.981416 4842 trace.go:236] Trace[1376167792]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 06:46:18.979) (total time: 10002ms):
Feb 02 06:46:28 crc kubenswrapper[4842]: Trace[1376167792]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (06:46:28.981)
Feb 02 06:46:28 crc kubenswrapper[4842]: Trace[1376167792]: [10.002268589s] [10.002268589s] END
Feb 02 06:46:28 crc kubenswrapper[4842]: E0202 06:46:28.981464 4842 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Feb 02 06:46:29 crc kubenswrapper[4842]: E0202 06:46:29.278172 4842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.18905b0f6c071ff5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 06:46:15.351648245 +0000 UTC m=+0.728916197,LastTimestamp:2026-02-02 06:46:15.351648245 +0000 UTC m=+0.728916197,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 06:46:29 crc kubenswrapper[4842]: I0202 06:46:29.355147 4842 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 02 06:46:29 crc kubenswrapper[4842]: I0202 06:46:29.365503 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:49:47.403220307 +0000 UTC
Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.172028 4842 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.172114 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.184928 4842 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.185268 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.366172 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:59:22.645540078 +0000 UTC
Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.657033 4842 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]log ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]etcd ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/priority-and-fairness-filter ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-apiextensions-informers ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-apiextensions-controllers ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/crd-informer-synced ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-system-namespaces-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 02 06:46:30 crc kubenswrapper[4842]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/priority-and-fairness-config-producer ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/bootstrap-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-registration-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-discovery-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]autoregister-completion ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-openapi-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 02 06:46:30 crc kubenswrapper[4842]: livez check failed
Feb 02 06:46:30 crc kubenswrapper[4842]: I0202 06:46:30.657096 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:46:31 crc kubenswrapper[4842]: I0202 06:46:31.367739 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:34:57.233600788 +0000 UTC
Feb 02 06:46:32 crc kubenswrapper[4842]: I0202 06:46:32.368420 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:07:29.078074974 +0000 UTC
Feb 02 06:46:32 crc kubenswrapper[4842]: I0202 06:46:32.735467 4842 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 02 06:46:33 crc kubenswrapper[4842]: I0202 06:46:33.369432 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:38:52.913162196 +0000 UTC
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.045086 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.045263 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
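The startup-probe records above show the kube-apiserver progressing from connection refused, to 403 (the probe hits /livez without credentials, so system:anonymous is forbidden), to a 500 whose body lists the two poststarthook checks still failing. The kubelet's prober counts a transport error or any non-2xx/3xx status as a failure. A simplified Go sketch of such an HTTPS probe follows; the endpoint and one-second timeout are taken from the log, while the TLS handling is illustrative.

// Sketch: the shape of the HTTPS probe failing above. Simplified: treat
// 2xx/3xx as success; anything else, or a transport error, is a failure.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second, // probes here time out after ~1s
		Transport: &http.Transport{
			// Assumed: skip verification for the self-signed control-plane cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("https://192.168.126.11:17697/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}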
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.046725 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.046779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.046792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.070529 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.289847 4842 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.289981 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.369808 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:39:37.98942513 +0000 UTC
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.616647 4842 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.618123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.618184 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:34 crc kubenswrapper[4842]: I0202 06:46:34.618210 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.184345 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.186526 4842 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.187981 4842 trace.go:236] Trace[1493835285]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 06:46:23.314) (total time: 11873ms):
Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[1493835285]: ---"Objects listed" error: 11873ms (06:46:35.187)
Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[1493835285]: [11.873418159s] [11.873418159s] END
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.188011 4842 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.188276 4842 trace.go:236] Trace[2102861008]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 06:46:24.179) (total time: 11009ms):
Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[2102861008]: ---"Objects listed" error: 11009ms (06:46:35.188)
Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[2102861008]: [11.009060498s] [11.009060498s] END
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.188294 4842 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.189856 4842 trace.go:236] Trace[1555826742]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (02-Feb-2026 06:46:22.338) (total time: 12851ms):
Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[1555826742]: ---"Objects listed" error: 12850ms (06:46:35.189)
Feb 02 06:46:35 crc kubenswrapper[4842]: Trace[1555826742]: [12.851054054s] [12.851054054s] END
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.189904 4842 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.191008 4842 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.200020 4842 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.200556 4842 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.202439 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.202507 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.202534 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.202572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.202597 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.225817 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.235848 4842 csr.go:261] certificate signing request csr-glph9 is approved, waiting to be issued Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.237792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.237872 4842 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.237895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.237933 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.237970 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.251886 4842 csr.go:257] certificate signing request csr-glph9 is issued Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.251955 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.256585 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.256620 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 
06:46:35.256632 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.256663 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.256676 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.269690 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.274384 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.274464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.274484 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.274517 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.274537 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.286868 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.292986 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.293069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.293090 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.293124 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.293144 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.307957 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.308119 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.309956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 
06:46:35.310003 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.310016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.310044 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.310058 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.352402 4842 apiserver.go:52] "Watching apiserver"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.356344 4842 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.356772 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357354 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357384 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357352 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357458 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357698 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.357771 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.357782 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.357492 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.358030 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.358671 4842 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.360414 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.361529 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.361830 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.361948 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.362093 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.363274 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.363498 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.363655 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.363835 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.370071 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:29:45.908475176 +0000 UTC
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.386931 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.387888 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.387891 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388005 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388033 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388060 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388084 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388118 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388142 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388165 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388189 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388272 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388295 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388318 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388340 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388363 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388436 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388459 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388484 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388545 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388571 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388595 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388620 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388660 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388682 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388696 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388704 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388771 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388798 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388822 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388848 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388871 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388894 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388943 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388959 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388952 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.388965 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389128 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389160 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389203 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389279 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389305 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389331 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389355 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389378 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389402 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389425 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389447 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389468 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389502 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389525 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389548 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389572 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389594 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389616 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389641 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389665 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389688 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389711 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389733 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389754 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389776 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389801 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389827 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389849 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389872 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389893 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389917 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389938 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389959 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389982 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390007 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390030 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390084 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390106 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390128 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390152 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390176 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390198 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390240 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390263 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390285 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390310 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390333 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390356 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390378 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390425 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390447 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390468 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389177 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389278 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389471 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389531 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389635 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389781 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389876 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.389952 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390079 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390121 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390198 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.390490 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391245 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391474 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391702 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391734 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391769 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.391806 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392035 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392081 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392425 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392451 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392586 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392579 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392669 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392781 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392794 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.392838 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393071 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393093 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393152 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393724 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393770 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393803 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393833 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.393563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.394468 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.394705 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.394890 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.394985 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395145 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395176 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395395 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395562 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395543 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.395950 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.396002 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.396092 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.396100 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.396319 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.396338 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.397520 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.397816 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.397994 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398124 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398192 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398265 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398295 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398097 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398618 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398649 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398645 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398707 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398745 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398781 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.398805 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399105 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399106 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399197 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399247 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399340 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.399480 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403332 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403402 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403446 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403480 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403510 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403540 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 
06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403570 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403597 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.403985 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404014 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404040 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404064 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404092 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404118 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404143 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404170 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.404195 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411342 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411428 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411465 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411501 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411535 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411565 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411598 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411622 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411652 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411680 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411711 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411735 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411762 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411794 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411821 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411843 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411863 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411890 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411911 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411931 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.411984 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412013 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412037 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412058 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412079 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412108 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412133 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412154 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412174 4842 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412204 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412255 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412283 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412326 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412352 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412370 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412395 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412423 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412453 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412431 4842 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412479 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.412511 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413165 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413347 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413668 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413712 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413903 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.413928 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414137 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414257 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414383 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414605 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414644 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.414813 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.415023 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.415346 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.415732 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.416057 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.419693 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.420124 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.420174 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.420479 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.420554 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.420735 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.421203 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.422277 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.422765 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.424840 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.425818 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.425870 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.427689 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.430502 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.431012 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.431036 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.431350 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.431361 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.431750 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432029 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432325 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432411 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432477 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432500 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432661 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432717 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432758 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432862 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.432881 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.433345 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.433556 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.433657 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.434076 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.434142 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.434606 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.435004 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.435058 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.435009 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.433908 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.435516 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.435621 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436276 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436342 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436373 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436394 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436418 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436635 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436684 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436725 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436757 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.436781 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.437201 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.437281 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.437714 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.437831 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.437754 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.438157 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.438736 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.438852 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439295 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439503 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439553 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439581 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439612 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439641 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439663 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439690 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.439714 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440017 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440387 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440565 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440621 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440685 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440727 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440754 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.440895 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.441188 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.441754 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.441900 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442028 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442087 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442177 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442231 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442261 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442365 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442631 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442708 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442745 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442785 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.442950 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443271 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443363 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443470 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.443377 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:46:35.943345468 +0000 UTC m=+21.320613380 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443631 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443711 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443779 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443853 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.443927 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.447278 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448396 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448496 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448569 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448655 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448735 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448820 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448889 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448951 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449017 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449083 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449593 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449717 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451083 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451782 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451874 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451901 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451932 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451964 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451995 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452057 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452087 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452114 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452207 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452257 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452286 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452314 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452470 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452493 4842 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452507 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452522 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452542 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452555 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452570 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452583 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452601 4842 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452614 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452627 4842 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452644 4842 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452656 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452677 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452690 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452706 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452720 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452734 4842 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452747 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452762 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452774 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452785 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452800 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452812 4842 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452825 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452837 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452851 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452862 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452874 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452886 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452900 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452911 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452924 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452936 4842 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452950 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452962 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452974 4842 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.452990 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453001 4842 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453015 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453028 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453043 4842 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453055 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453066 4842 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453078 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453092 4842 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453104 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453117 4842 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453129 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453144 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453157 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453168 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453182 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" 
DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453194 4842 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453206 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453840 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453863 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453876 4842 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453889 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453902 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453920 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453933 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453945 4842 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453960 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453973 4842 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.453986 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc 
kubenswrapper[4842]: I0202 06:46:35.453998 4842 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454014 4842 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454026 4842 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454054 4842 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454067 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454084 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454098 4842 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454110 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454124 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454139 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454152 4842 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454165 4842 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454181 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454195 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454209 4842 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454245 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454259 4842 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454271 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454284 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454297 4842 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454312 4842 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454324 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454337 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454354 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454366 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454378 4842 reconciler_common.go:293] "Volume detached for 
volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454391 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454406 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454418 4842 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454430 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454445 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454463 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454477 4842 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454492 4842 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454503 4842 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454518 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454531 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454543 4842 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 
06:46:35.454560 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454572 4842 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454583 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454595 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454610 4842 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454621 4842 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454632 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454642 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454657 4842 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454669 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454681 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454695 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454706 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc 
kubenswrapper[4842]: I0202 06:46:35.454717 4842 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454728 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454742 4842 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454794 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454812 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454824 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454841 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454855 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454867 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454879 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454896 4842 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454909 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454922 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454939 4842 
reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454953 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454968 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454983 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.455002 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.455015 4842 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.448962 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449289 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449544 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.449558 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.450117 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.450505 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.450582 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.450654 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.450670 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451393 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.451686 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454342 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454612 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454893 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.454905 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.455665 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.456049 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.456505 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.457446 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.457912 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.458037 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.458253 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.458501 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.462374 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.467465 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.469115 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.470033 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.472828 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.472938 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.473721 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.474102 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.474280 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.474337 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:35.974320804 +0000 UTC m=+21.351588716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.474598 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.474836 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.475056 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.475751 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.476433 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.476867 4842 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.478710 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.455026 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.480739 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481170 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484286 4842 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484307 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484318 4842 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484328 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484339 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.480044 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484350 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.480613 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 06:46:35 crc 
kubenswrapper[4842]: I0202 06:46:35.480298 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.480627 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484378 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.480946 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484461 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481079 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481354 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481506 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481844 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.481907 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.483976 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.484458 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:35.984435921 +0000 UTC m=+21.361704033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.484545 4842 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.485316 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.485313 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.485835 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.486849 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.487497 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.488128 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.489177 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.489959 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.489980 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.489994 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.490045 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:35.990029068 +0000 UTC m=+21.367296980 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.490290 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.490847 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.491193 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.492547 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.492717 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.492805 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.493189 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.494010 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.494034 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.494047 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.494095 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:35.994083907 +0000 UTC m=+21.371351819 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.494196 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.497577 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.497792 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.499178 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.499800 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.502332 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.503178 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.503262 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.503637 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.504800 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.505451 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.505901 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.506850 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.507034 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.507468 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.509035 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.509539 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.509765 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.510145 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.510647 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.511258 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.512170 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.512722 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.513512 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.513992 4842 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.514090 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.515624 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.515738 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.516610 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.516985 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.518507 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.519565 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.520096 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.521086 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.521716 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.522510 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.523063 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.523385 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.524001 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.524606 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.525403 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.525911 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.526721 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.527403 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.528191 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.528655 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.529498 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.530161 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.530710 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.531506 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.533801 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.542726 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.544702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.544733 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.544743 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.544756 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.544766 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.553208 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.563643 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.573778 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.581765 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585208 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585255 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585307 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585318 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585328 4842 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585392 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585466 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585529 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585552 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585570 4842 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585584 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585596 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585609 4842 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585620 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585633 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585647 4842 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585662 4842 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585675 4842 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585687 4842 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585698 4842 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585710 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585631 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585723 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585772 4842 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585788 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585801 4842 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585815 4842 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585827 4842 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585839 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585851 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585865 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585879 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585893 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585905 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585916 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585929 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585940 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585953 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585965 4842 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585976 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585988 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.585999 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586014 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586025 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586037 4842 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586048 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586060 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586071 4842 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586085 4842 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.586098 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.590068 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.620844 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.622497 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169" exitCode=255
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.622547 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169"}
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.632488 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.643287 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.648633 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.648686 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.648703 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.648726 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.648743 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.652745 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.653016 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.662312 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.670048 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.670068 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.678067 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.679843 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.684821 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 02 06:46:35 crc kubenswrapper[4842]: W0202 06:46:35.687294 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-853567595e3ffa664f5c5835b3b15e8d1a84a0bd0556c7e242ad01d82ea23b31 WatchSource:0}: Error finding container 853567595e3ffa664f5c5835b3b15e8d1a84a0bd0556c7e242ad01d82ea23b31: Status 404 returned error can't find the container with id 853567595e3ffa664f5c5835b3b15e8d1a84a0bd0556c7e242ad01d82ea23b31 Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.689480 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.698495 4842 scope.go:117] "RemoveContainer" containerID="628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.698919 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.705264 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.719576 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.734416 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.746751 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.759741 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.762449 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.762817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.762830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.762849 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.762878 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.868541 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.868612 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.868763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.868788 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.868806 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.973591 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.973626 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.973637 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.973655 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.973668 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:35Z","lastTransitionTime":"2026-02-02T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.990198 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.990301 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.990345 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990369 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:46:36.990340975 +0000 UTC m=+22.367608887 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:46:35 crc kubenswrapper[4842]: I0202 06:46:35.990425 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990448 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990500 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:36.990484399 +0000 UTC m=+22.367752331 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990545 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990572 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:36.990563051 +0000 UTC m=+22.367830983 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990575 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990589 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990600 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:35 crc kubenswrapper[4842]: E0202 06:46:35.990632 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:36.990625832 +0000 UTC m=+22.367893744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.075724 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.075763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.075772 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.075790 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.075799 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.091150 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:36 crc kubenswrapper[4842]: E0202 06:46:36.091289 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:36 crc kubenswrapper[4842]: E0202 06:46:36.091311 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:36 crc kubenswrapper[4842]: E0202 06:46:36.091322 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:36 crc kubenswrapper[4842]: E0202 06:46:36.091361 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:37.09134815 +0000 UTC m=+22.468616062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.178063 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.178112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.178124 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.178142 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.178154 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.252782 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-02 06:41:35 +0000 UTC, rotation deadline is 2026-11-21 08:22:26.985333574 +0000 UTC Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.252881 4842 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7009h35m50.732456533s for next certificate rotation Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.281055 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.281082 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.281092 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.281106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.281115 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.370256 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:15:19.517161165 +0000 UTC Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.383244 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.383278 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.383286 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.383301 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.383309 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.485572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.485625 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.485642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.485667 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.485709 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.550747 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-p5hqr"] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.551102 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.552933 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-q2xjl"] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.553270 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.553829 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554190 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554287 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554688 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554864 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554882 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.554280 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.555651 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-njnbq"] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.556074 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.556463 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.562052 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-j7rrg"] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.563951 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567010 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567104 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567161 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567204 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567200 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567249 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.567448 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.568716 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.571202 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.571474 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.571664 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.572134 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.573685 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gmkx9"] Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.574166 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.576701 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.576935 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.588404 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.588438 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.588449 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.588465 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.588477 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.589288 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595582 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595639 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595671 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595703 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-system-cni-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595733 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475lt\" (UniqueName: \"kubernetes.io/projected/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-kube-api-access-475lt\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595762 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0cc6e593-198e-4709-9026-103f892be5ff-proxy-tls\") pod 
\"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595789 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595817 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595863 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-os-release\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595893 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0cc6e593-198e-4709-9026-103f892be5ff-rootfs\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595920 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595948 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.595976 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdmbp\" (UniqueName: \"kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596009 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqr8f\" (UniqueName: \"kubernetes.io/projected/0cc6e593-198e-4709-9026-103f892be5ff-kube-api-access-kqr8f\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 
06:46:36.596038 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596066 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596095 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596128 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596165 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/110e0716-4e1c-49a1-acbb-016312fdb070-hosts-file\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596191 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596233 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596255 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596283 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596310 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596337 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596358 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0cc6e593-198e-4709-9026-103f892be5ff-mcd-auth-proxy-config\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596377 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596396 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-binary-copy\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596416 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596434 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596452 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596473 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cnibin\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.596500 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4jq8\" (UniqueName: \"kubernetes.io/projected/110e0716-4e1c-49a1-acbb-016312fdb070-kube-api-access-c4jq8\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.604881 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.626729 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.627518 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.630039 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.630807 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.631785 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c52035ea632a2c3e0a510756db259a4597bd6222111b1d7a316b030ee6ea0fe0"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.636424 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.636456 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"62145383b727e93d1fe22a7dfa6b24e7fd0cba3a9abb9b3ecd18dc16c39a6543"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.638036 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.638598 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.638651 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.638667 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"853567595e3ffa664f5c5835b3b15e8d1a84a0bd0556c7e242ad01d82ea23b31"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.646560 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.663781 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.691585 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.691622 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.691634 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.691656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.691670 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697346 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697407 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-os-release\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697456 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0cc6e593-198e-4709-9026-103f892be5ff-mcd-auth-proxy-config\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697492 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-binary-copy\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697529 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697577 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697605 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-etc-kubernetes\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697646 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-netns\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697680 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697710 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-multus-certs\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697739 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cnibin\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697765 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4jq8\" (UniqueName: \"kubernetes.io/projected/110e0716-4e1c-49a1-acbb-016312fdb070-kube-api-access-c4jq8\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697801 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-system-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697833 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cni-binary-copy\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697860 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-daemon-config\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697900 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697960 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-multus\") pod \"multus-gmkx9\" (UID: 
\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697996 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-socket-dir-parent\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698041 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698070 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cnibin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698096 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-k8s-cni-cncf-io\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698124 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-hostroot\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698150 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-system-cni-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698180 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-475lt\" (UniqueName: \"kubernetes.io/projected/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-kube-api-access-475lt\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698241 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-os-release\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698273 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0cc6e593-198e-4709-9026-103f892be5ff-rootfs\") pod \"machine-config-daemon-p5hqr\" (UID: 
\"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698299 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0cc6e593-198e-4709-9026-103f892be5ff-proxy-tls\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698329 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698359 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698400 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698444 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqr8f\" (UniqueName: \"kubernetes.io/projected/0cc6e593-198e-4709-9026-103f892be5ff-kube-api-access-kqr8f\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698476 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698502 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698528 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdmbp\" (UniqueName: \"kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698601 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-bin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698688 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0cc6e593-198e-4709-9026-103f892be5ff-mcd-auth-proxy-config\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698701 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698739 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698768 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698792 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698816 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/110e0716-4e1c-49a1-acbb-016312fdb070-hosts-file\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698832 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698862 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698880 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash\") pod 
\"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698897 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698917 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-kubelet\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698918 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698937 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.698959 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-conf-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699037 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699059 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4nf6\" (UniqueName: \"kubernetes.io/projected/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-kube-api-access-k4nf6\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699170 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699204 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cni-binary-copy\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: 
\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699260 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699299 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699331 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699577 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699643 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-system-cni-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.699959 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/110e0716-4e1c-49a1-acbb-016312fdb070-hosts-file\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700019 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700085 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700131 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700132 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-os-release\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700186 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0cc6e593-198e-4709-9026-103f892be5ff-rootfs\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700379 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700508 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700618 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.697790 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700711 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700776 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700827 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.700834 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.701087 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-cnibin\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.701556 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.702061 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.702230 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.704514 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.704882 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0cc6e593-198e-4709-9026-103f892be5ff-proxy-tls\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.726208 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdmbp\" (UniqueName: \"kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp\") pod \"ovnkube-node-njnbq\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.726361 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-475lt\" (UniqueName: \"kubernetes.io/projected/a55bc304-5cb2-4f7f-83b9-09d8188c73f2-kube-api-access-475lt\") pod \"multus-additional-cni-plugins-j7rrg\" (UID: \"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\") " pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.729082 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4jq8\" (UniqueName: \"kubernetes.io/projected/110e0716-4e1c-49a1-acbb-016312fdb070-kube-api-access-c4jq8\") pod \"node-resolver-q2xjl\" (UID: \"110e0716-4e1c-49a1-acbb-016312fdb070\") " pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.729087 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kqr8f\" (UniqueName: \"kubernetes.io/projected/0cc6e593-198e-4709-9026-103f892be5ff-kube-api-access-kqr8f\") pod \"machine-config-daemon-p5hqr\" (UID: \"0cc6e593-198e-4709-9026-103f892be5ff\") " pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.736527 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.759539 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462
\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.760070 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-q2xjl" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.780271 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.790878 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.793916 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.794737 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 
02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.794758 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.794767 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.794786 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.794796 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807770 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-kubelet\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807840 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-conf-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807909 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4nf6\" (UniqueName: \"kubernetes.io/projected/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-kube-api-access-k4nf6\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807945 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-os-release\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807988 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-kubelet\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808011 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-etc-kubernetes\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.807988 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-conf-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " 
pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808057 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-netns\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808063 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-os-release\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808094 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-multus-certs\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808123 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-netns\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808099 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-etc-kubernetes\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808159 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-system-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808196 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-multus-certs\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808201 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cni-binary-copy\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808261 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-system-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808263 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-daemon-config\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808316 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-multus\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808462 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-socket-dir-parent\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808510 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cnibin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808538 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-k8s-cni-cncf-io\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808562 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-hostroot\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808620 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808671 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-bin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808868 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cni-binary-copy\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808905 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-daemon-config\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: 
I0202 06:46:36.808925 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-multus\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808951 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-hostroot\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808927 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-var-lib-cni-bin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808972 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-socket-dir-parent\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.808985 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-cnibin\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.809015 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-host-run-k8s-cni-cncf-io\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.809140 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-multus-cni-dir\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: W0202 06:46:36.816027 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55bc304_5cb2_4f7f_83b9_09d8188c73f2.slice/crio-c1ba4f530cfacf87b0882cd023c3634cd5e10ef021e7cba897bc2d2d470d5361 WatchSource:0}: Error finding container c1ba4f530cfacf87b0882cd023c3634cd5e10ef021e7cba897bc2d2d470d5361: Status 404 returned error can't find the container with id c1ba4f530cfacf87b0882cd023c3634cd5e10ef021e7cba897bc2d2d470d5361 Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.826512 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.834554 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4nf6\" (UniqueName: \"kubernetes.io/projected/c1fd21cd-ea6a-44a0-b136-f338fc97cf18-kube-api-access-k4nf6\") pod \"multus-gmkx9\" (UID: \"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\") " pod="openshift-multus/multus-gmkx9" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.859541 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.887137 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.897488 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.897525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.897534 4842 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.897552 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.897561 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.902870 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.920989 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.939530 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.952538 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.969514 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.997240 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:36Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.999306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.999335 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.999345 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.999358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:36 crc kubenswrapper[4842]: I0202 06:46:36.999368 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:36Z","lastTransitionTime":"2026-02-02T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.008502 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.015444 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.015560 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:37 crc 
kubenswrapper[4842]: I0202 06:46:37.015592 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015619 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:46:39.015594403 +0000 UTC m=+24.392862315 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015653 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.015681 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015694 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:39.015682975 +0000 UTC m=+24.392950887 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015766 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015779 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015790 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015812 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015817 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:39.015810948 +0000 UTC m=+24.393078860 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.015845 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:39.015838369 +0000 UTC m=+24.393106271 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.020163 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.033245 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: W0202 06:46:37.041478 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cc6e593_198e_4709_9026_103f892be5ff.slice/crio-2e6abd8f11c46b3911c9657f74809f3ba3dc9f664743cc6cb4f89a69d41d451c WatchSource:0}: Error finding container 2e6abd8f11c46b3911c9657f74809f3ba3dc9f664743cc6cb4f89a69d41d451c: Status 404 returned error can't find the container with id 2e6abd8f11c46b3911c9657f74809f3ba3dc9f664743cc6cb4f89a69d41d451c Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.102407 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.102467 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.102484 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.102509 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.102523 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.108019 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gmkx9" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.116420 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.116624 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.116651 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.116664 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.116728 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:39.11670953 +0000 UTC m=+24.493977442 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:37 crc kubenswrapper[4842]: W0202 06:46:37.119890 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1fd21cd_ea6a_44a0_b136_f338fc97cf18.slice/crio-26110d07d162878059c0c70c3a6ebcb6741fd944930e4d0cb51d902fcab16a2a WatchSource:0}: Error finding container 26110d07d162878059c0c70c3a6ebcb6741fd944930e4d0cb51d902fcab16a2a: Status 404 returned error can't find the container with id 26110d07d162878059c0c70c3a6ebcb6741fd944930e4d0cb51d902fcab16a2a Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.205532 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.205593 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.205609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.205637 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.205654 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.307995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.308040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.308052 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.308069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.308081 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.371051 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:20:58.772815456 +0000 UTC Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.416144 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.416205 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.416511 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.416540 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.416558 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.432988 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.433007 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.433075 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.433397 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.433509 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:37 crc kubenswrapper[4842]: E0202 06:46:37.433654 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.437452 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.438377 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.519071 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.519106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.519115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.519128 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.519136 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.621730 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.622190 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.622274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.622346 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.622404 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.642967 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerStarted","Data":"8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.643067 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerStarted","Data":"26110d07d162878059c0c70c3a6ebcb6741fd944930e4d0cb51d902fcab16a2a"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.644515 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09" exitCode=0 Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.644560 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.644686 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerStarted","Data":"c1ba4f530cfacf87b0882cd023c3634cd5e10ef021e7cba897bc2d2d470d5361"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.646785 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" exitCode=0 Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.646891 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.646952 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"ad55e0c8d5649109a4ec1a9a3e073a9a325c6f3565638121dd923673a8430c3b"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.650428 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-q2xjl" event={"ID":"110e0716-4e1c-49a1-acbb-016312fdb070","Type":"ContainerStarted","Data":"172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.650463 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-q2xjl" event={"ID":"110e0716-4e1c-49a1-acbb-016312fdb070","Type":"ContainerStarted","Data":"4eb673fa7258b1ad4a84348c36b407715714c46244de27067b0ca28eaf6a9837"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.652708 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.652766 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.652790 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"2e6abd8f11c46b3911c9657f74809f3ba3dc9f664743cc6cb4f89a69d41d451c"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.656782 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.674124 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.686996 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.702457 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.715564 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.724157 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.724205 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.724236 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.724257 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.724269 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.731102 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.748762 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.776401 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.827616 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.827681 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 
06:46:37.827695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.827713 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.827722 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.843853 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.859421 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.874308 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.888614 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.901030 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.916608 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.930331 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.930371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.930380 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.930396 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.930406 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:37Z","lastTransitionTime":"2026-02-02T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.939666 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.954510 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.967628 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.983015 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:37 crc kubenswrapper[4842]: I0202 06:46:37.999516 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.013570 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.032862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.032891 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.032900 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.032914 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.032924 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.038618 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85
db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.060181 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc 
kubenswrapper[4842]: I0202 06:46:38.077668 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.093608 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.135081 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.135372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.135464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.135555 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.135634 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.237985 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.238032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.238045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.238064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.238082 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.340753 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.340807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.340819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.340838 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.340851 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.372304 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:49:46.385788002 +0000 UTC Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.444799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.445094 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.445105 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.445126 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.445138 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.547611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.547648 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.547656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.547670 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.547680 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.650103 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.650146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.650162 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.650179 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.650190 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.663749 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75" exitCode=0 Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.663830 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673840 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673906 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673943 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673957 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673969 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.673980 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.676687 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.681891 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.699503 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.715735 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.742461 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-c
ni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.754280 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.754331 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.754348 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.754372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.754390 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.758667 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.779677 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.825284 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.853058 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.856315 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.856341 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.856351 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.856365 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.856375 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.874418 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.897577 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z 
is after 2025-08-24T17:21:41Z" Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.912536 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.924694 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.936794 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.947549 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.958458 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.959267 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.959304 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.959314 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.959328 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.959338 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:38Z","lastTransitionTime":"2026-02-02T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.970133 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.983856 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:38 crc kubenswrapper[4842]: I0202 06:46:38.996244 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:38Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.005898 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.016286 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.027483 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.036531 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.036681 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036735 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:46:43.036704532 +0000 UTC m=+28.413972464 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.036804 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036850 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036873 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036889 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036926 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.036887 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036986 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.036937 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:43.036921667 +0000 UTC m=+28.414189589 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.037137 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:43.037112522 +0000 UTC m=+28.414380454 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.037156 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:43.037146782 +0000 UTC m=+28.414414704 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.039886 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.061611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.061645 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.061656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.061671 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.061685 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.064448 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.084811 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.138649 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.138882 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.138918 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.138938 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.139024 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:43.138999328 +0000 UTC m=+28.516267280 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.165320 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.165381 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.165399 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.165422 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.165441 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.268856 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.268917 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.268940 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.268972 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.268991 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.371520 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.371568 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.371582 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.371602 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.371615 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.372699 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:23:00.118996957 +0000 UTC Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.433665 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.433741 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.433816 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.433914 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.434105 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:39 crc kubenswrapper[4842]: E0202 06:46:39.434254 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.474427 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.474479 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.474496 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.474522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.474540 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.577824 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.577888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.577901 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.577926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.577941 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.680688 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.680742 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.680759 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.680819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.680834 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.683847 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf" exitCode=0 Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.683967 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.701959 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.718766 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.741132 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.770669 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.785978 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.786031 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.786046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.786067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.786082 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.797498 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.822632 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.838375 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.857535 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.871504 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.888200 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.892430 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.892471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.892482 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.892498 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.892509 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.906795 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.922273 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.995419 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:39 
crc kubenswrapper[4842]: I0202 06:46:39.995474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.995489 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.995512 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:39 crc kubenswrapper[4842]: I0202 06:46:39.995529 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:39Z","lastTransitionTime":"2026-02-02T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.045729 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ms7n2"] Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.046168 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.048165 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.048448 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.049144 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.049670 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.065114 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.081402 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.097152 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.098483 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.098561 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.098582 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.098611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.098631 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.109120 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.125013 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"na
me\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.141200 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.153554 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.153682 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7tn4\" (UniqueName: \"kubernetes.io/projected/f026f084-0079-47a5-906c-14eb439eaa86-kube-api-access-h7tn4\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.153739 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f026f084-0079-47a5-906c-14eb439eaa86-serviceca\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.153823 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f026f084-0079-47a5-906c-14eb439eaa86-host\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.168397 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.184145 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.202283 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.202347 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.202365 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.202393 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.202413 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.206965 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.226135 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.243810 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.255183 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7tn4\" (UniqueName: \"kubernetes.io/projected/f026f084-0079-47a5-906c-14eb439eaa86-kube-api-access-h7tn4\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.255306 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f026f084-0079-47a5-906c-14eb439eaa86-serviceca\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.255392 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f026f084-0079-47a5-906c-14eb439eaa86-host\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.255546 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f026f084-0079-47a5-906c-14eb439eaa86-host\") pod \"node-ca-ms7n2\" (UID: 
\"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.257433 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f026f084-0079-47a5-906c-14eb439eaa86-serviceca\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.273753 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z 
is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.278271 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7tn4\" (UniqueName: \"kubernetes.io/projected/f026f084-0079-47a5-906c-14eb439eaa86-kube-api-access-h7tn4\") pod \"node-ca-ms7n2\" (UID: \"f026f084-0079-47a5-906c-14eb439eaa86\") " pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.306676 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.306747 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.306773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.306805 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.306828 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.373162 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:48:05.169687077 +0000 UTC Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.394553 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ms7n2" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.412341 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.412887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.413024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.413149 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.413289 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: W0202 06:46:40.416673 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf026f084_0079_47a5_906c_14eb439eaa86.slice/crio-dbca84cd798ea1e5b2203ed571b2cb6d7aceb6e504160af882b45da434623db6 WatchSource:0}: Error finding container dbca84cd798ea1e5b2203ed571b2cb6d7aceb6e504160af882b45da434623db6: Status 404 returned error can't find the container with id dbca84cd798ea1e5b2203ed571b2cb6d7aceb6e504160af882b45da434623db6 Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.516350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.516416 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.516436 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.516464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.516484 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.620648 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.620709 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.620724 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.620750 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.620765 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.692785 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa" exitCode=0 Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.692879 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.703799 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.705727 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ms7n2" event={"ID":"f026f084-0079-47a5-906c-14eb439eaa86","Type":"ContainerStarted","Data":"9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.705805 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ms7n2" event={"ID":"f026f084-0079-47a5-906c-14eb439eaa86","Type":"ContainerStarted","Data":"dbca84cd798ea1e5b2203ed571b2cb6d7aceb6e504160af882b45da434623db6"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.717468 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z 
is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.726474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.726525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.726538 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.726560 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.726572 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.737110 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.752650 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.781316 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.798935 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.814552 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.829337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.829399 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.829418 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.829443 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.829466 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.830017 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.841634 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.860448 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.872425 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 
06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.889510 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.901073 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.913974 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.927449 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.932366 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.932424 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.932438 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.932464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.932482 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:40Z","lastTransitionTime":"2026-02-02T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.939968 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.954840 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.968844 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.981772 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:40 crc kubenswrapper[4842]: I0202 06:46:40.992190 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:40Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.002209 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.016044 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.034766 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.037339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.037456 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.037531 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.037604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.037675 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.045964 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.087736 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z 
is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.104612 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.117456 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.143508 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.143573 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.143594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.143624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.143645 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.247185 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.247266 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.247281 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.247315 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.247336 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.295647 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.301571 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.310461 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.313559 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.330334 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.344113 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.349926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.349997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.350018 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.350036 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.350049 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.356559 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.373263 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"na
me\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.373524 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:37:13.234048348 +0000 UTC Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.392285 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.392285 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.408840 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.425797 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.433561 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:41 crc kubenswrapper[4842]: E0202 06:46:41.433717 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.434103 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:41 crc kubenswrapper[4842]: E0202 06:46:41.434159 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.434280 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:41 crc kubenswrapper[4842]: E0202 06:46:41.434337 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.444267 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.453069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.453112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.453129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.453151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.453167 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.468054 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.509150 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.528732 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.554171 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.560602 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.560727 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.560741 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.560769 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.560781 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.571572 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.586772 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.602265 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.626649 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.644363 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.659673 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.664130 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.664181 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.664194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.664233 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.664246 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.676086 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z"
Time\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.691498 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faa
f92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.711697 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.714342 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017" exitCode=0 Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.714443 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.730766 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.761721 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.772344 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.772423 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.772441 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.772466 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.772483 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.794105 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.818328 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.838175 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z"
2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.855287 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.872414 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.876752 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.876790 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.876801 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.876850 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.876865 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.895163 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.922016 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.946185 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.968630 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.979590 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.979662 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.979682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.979709 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:41 crc kubenswrapper[4842]: I0202 06:46:41.979726 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:41Z","lastTransitionTime":"2026-02-02T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.000722 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85
db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:41Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.025135 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.065330 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.083910 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.083977 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.083996 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.084027 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.084051 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.102611 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.124842 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.137963 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.150736 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.165211 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.187806 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.187880 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.187900 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.187932 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.187951 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.291297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.291369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.291387 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.291414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.291435 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.374413 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:32:03.202381396 +0000 UTC Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.394436 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.394579 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.394697 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.394780 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.394854 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.498461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.498523 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.498539 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.498565 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.498582 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.601898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.602542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.602840 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.603013 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.603213 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.706594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.706658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.706680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.706708 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.706727 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.723388 4842 generic.go:334] "Generic (PLEG): container finished" podID="a55bc304-5cb2-4f7f-83b9-09d8188c73f2" containerID="34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5" exitCode=0 Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.723481 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerDied","Data":"34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.742651 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.756371 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.778468 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.799535 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.814731 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.814793 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.814811 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.814836 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.814855 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.831408 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.854191 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.868166 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.890638 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.914561 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.922851 4842 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.922896 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.922908 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.922928 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.922943 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:42Z","lastTransitionTime":"2026-02-02T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.932151 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.944873 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.958142 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.970978 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:42 crc kubenswrapper[4842]: I0202 06:46:42.987969 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:42Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025402 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025440 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025449 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025480 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089481 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089586 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089613 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089641 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089752 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025402 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025440 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025449 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.025480 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089481 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089586 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089613 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.089641 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089752 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089805 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.089791852 +0000 UTC m=+36.467059764 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089883 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089911 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089925 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.089981 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.089960406 +0000 UTC m=+36.467228318 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
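The nestedpendingoperations entries above schedule the next mount attempt 8 s out (see durationBeforeRetry and the m=+36.46… monotonic offsets). Failed volume operations are retried with exponential backoff rather than in a hot loop. A sketch of that policy follows; the 500 ms initial delay, factor 2, and ~2 m cap are assumptions modeled on the upstream exponential-backoff helper, not values read from this log:

package main

import (
	"fmt"
	"time"
)

// Assumed constants; kubelet's real values live in its exponentialbackoff package.
const (
	initialDelay = 500 * time.Millisecond
	factor       = 2.0
	maxDelay     = 2*time.Minute + 2*time.Second
)

func main() {
	delay := initialDelay
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("failure %d -> durationBeforeRetry %s\n", attempt, delay)
		// Each consecutive failure doubles the wait, up to the cap.
		delay = time.Duration(float64(delay) * factor)
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

With those assumed constants, a fifth consecutive failure lands on an 8 s delay, consistent with the retries recorded here.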
Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.090103 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.090053 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.090046488 +0000 UTC m=+36.467314400 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.090782 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.090762556 +0000 UTC m=+36.468030488 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.127927 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.127958 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.127966 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.127981 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.127992 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
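The recurring Ready=False condition cites the same cause each time: the kubelet finds no CNI config in /etc/kubernetes/cni/net.d/, so the container runtime network stays NetworkReady=false and the node cannot go Ready until the network plugin drops a config file there. A sketch of the minimal shape such a file takes; the file name, cniVersion, and network name here are assumptions, though the plugin type ovn-k8s-cni-overlay matches the OVN-Kubernetes overlay referenced elsewhere in this log:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Minimal CNI config of the kind kubelet scans for in its conf dir.
	// Values below are illustrative, not copied from the cluster.
	conf := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "ovn-kubernetes",
		"type":       "ovn-k8s-cni-overlay",
	}
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}

On this cluster it is ovnkube-node (whose containers start a few entries below) that writes the real config once OVN is up, which is what eventually clears the NotReady condition.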
Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.190430 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.190642 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.190662 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.190673 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.190732 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.190717035 +0000 UTC m=+36.567984947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.231016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.231064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.231073 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.231088 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.231098 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.334193 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.334294 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.334315 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.334351 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.334373 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.378780 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:00:45.077555876 +0000 UTC Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.436383 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.436536 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.436546 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.436635 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.436810 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:43 crc kubenswrapper[4842]: E0202 06:46:43.436932 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.438001 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.438042 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.438054 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.438071 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.438084 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.541314 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.541358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.541369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.541385 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.541397 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.644587 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.644639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.644656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.644678 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.644695 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.734369 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" event={"ID":"a55bc304-5cb2-4f7f-83b9-09d8188c73f2","Type":"ContainerStarted","Data":"22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.742750 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.743125 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.748373 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.748451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.748469 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.748490 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.748504 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.764806 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.778418 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.783900 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.801861 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.822749 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.845012 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.851485 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.851542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.851560 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.851584 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.851603 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.866774 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.888962 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.906462 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.922905 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.940041 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.954705 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.954925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.955007 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.955096 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.955246 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:43Z","lastTransitionTime":"2026-02-02T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.955368 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.971114 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:43 crc kubenswrapper[4842]: I0202 06:46:43.985687 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.000443 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:43Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.022455 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.045593 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.058422 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.058619 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.058754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.058862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.058945 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.069588 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.091643 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.110606 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 
06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.132555 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.153988 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.162130 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.162213 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.162275 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.162311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.162336 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.175745 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.193413 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.217950 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.238364 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.264099 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.265003 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.265057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.265076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.265103 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.265123 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.283668 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.300459 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.368382 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.368754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.368949 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.369110 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.369479 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.379847 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:26:17.431608144 +0000 UTC Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.473792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.473858 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.473895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.473931 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.473956 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.577414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.577477 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.577500 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.577537 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.577562 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.680469 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.680549 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.680574 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.680606 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.680623 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.746959 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.747639 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.815921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.815998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.816017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.816051 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.816076 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.820526 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.845599 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.868093 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.892703 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.920078 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.920597 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.920830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.920268 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.921322 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.921563 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:44Z","lastTransitionTime":"2026-02-02T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.949063 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.967148 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:44 crc kubenswrapper[4842]: I0202 06:46:44.983487 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.002800 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:44Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.025119 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.025190 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.025210 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.025279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.025347 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.028488 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.055356 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.081720 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.119633 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b
2799856c542758af23d773e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.127873 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.127926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.127947 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.127976 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.127997 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.153347 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.178475 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.187493 4842 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.230839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.230877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.230889 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.230909 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.230923 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.333096 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.333450 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.333463 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.333482 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.333495 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.380931 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:18:01.607470475 +0000 UTC
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.432540 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.432653 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.432796 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.432797 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.432872 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.432984 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.436095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.436148 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.436169 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.436195 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.436237 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.454417 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.474761 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.488793 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.502920 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.524716 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540368 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540182 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.
io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540432 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.540647 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.545845 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.545936 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.546017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.546172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.546335 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.553507 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.564150 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.564438 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.570656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.570803 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.570865 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.570953 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.571028 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.578685 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.583961 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.587728 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.587759 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.587773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.587788 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.587798 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.598201 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.604062 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.607935 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.607964 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.607975 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.607989 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.607999 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.614706 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.620450 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.624549 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.624590 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.624603 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.624622 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.624634 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.630251 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.642570 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a
2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: E0202 06:46:45.642966 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.645141 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.645187 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.645207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.645272 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.645295 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.653545 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b
2799856c542758af23d773e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.674635 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.748655 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.748715 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:45 crc 
kubenswrapper[4842]: I0202 06:46:45.748764 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.748793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.748818 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.748666 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.858767 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.858835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.858854 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.858880 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.858904 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.968419 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.968486 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.968506 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.968535 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:45 crc kubenswrapper[4842]: I0202 06:46:45.968553 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:45Z","lastTransitionTime":"2026-02-02T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.071475 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.071535 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.071551 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.071576 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.071593 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.173704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.173773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.173794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.173819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.173839 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.276368 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.276435 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.276452 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.276479 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.276498 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.379015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.379091 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.379111 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.379133 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.379150 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.381118 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:26:42.595104879 +0000 UTC
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.481775 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.481841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.481862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.481888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.481906 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.584363 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.584426 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.584444 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.584469 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.584489 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.687629 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.687702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.687722 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.687749 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.687768 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.755178 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/0.log" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.759381 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5" exitCode=1 Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.759442 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.760606 4842 scope.go:117] "RemoveContainer" containerID="2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.783855 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.790609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc 
kubenswrapper[4842]: I0202 06:46:46.790671 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.790695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.790723 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.790745 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.805127 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.830295 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.852575 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.870442 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.882980 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.892852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.892912 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.892930 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.893004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.893023 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.900918 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.917543 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.933865 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.953848 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.987755 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"1 6111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.915707 6111 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.916042 6111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 06:46:45.916095 6111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 06:46:45.916105 6111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:46:45.916143 6111 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 06:46:45.916155 6111 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 06:46:45.916170 6111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 06:46:45.916188 6111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:46:45.916197 6111 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:46:45.916204 6111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:46:45.916266 6111 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:46:45.916303 6111 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:46:45.916310 6111 factory.go:656] Stopping watch factory\\\\nI0202 06:46:45.916334 6111 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:46Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.995339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.995380 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.995391 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.995412 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:46 crc kubenswrapper[4842]: I0202 06:46:46.995424 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:46Z","lastTransitionTime":"2026-02-02T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.015943 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.036143 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.056847 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.097631 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.097675 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.097690 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.097708 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.097723 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.201070 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.201125 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.201144 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.201170 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.201189 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.304040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.304102 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.304122 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.304146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.304164 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.381528 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:04:09.650156536 +0000 UTC Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.406821 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.406855 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.406865 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.406881 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.406893 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.432453 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:47 crc kubenswrapper[4842]: E0202 06:46:47.432586 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.432691 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:47 crc kubenswrapper[4842]: E0202 06:46:47.432826 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.433045 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:47 crc kubenswrapper[4842]: E0202 06:46:47.433154 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.509207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.509262 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.509270 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.509282 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.509291 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.611473 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.611498 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.611506 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.611518 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.611527 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.713493 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.713530 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.713539 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.713552 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.713559 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.764881 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/0.log" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.767712 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.767911 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.786099 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.799646 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.816189 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.816229 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.816238 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.816251 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.816262 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.821448 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"1 6111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.915707 6111 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.916042 6111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 06:46:45.916095 6111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 06:46:45.916105 6111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:46:45.916143 6111 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 06:46:45.916155 6111 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 06:46:45.916170 6111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 06:46:45.916188 6111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:46:45.916197 6111 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:46:45.916204 6111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:46:45.916266 6111 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:46:45.916303 6111 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:46:45.916310 6111 factory.go:656] Stopping watch factory\\\\nI0202 06:46:45.916334 6111 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.836312 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.854248 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.871002 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.883016 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.915243 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.918021 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.918044 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.918052 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.918064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.918082 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:47Z","lastTransitionTime":"2026-02-02T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.938504 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.959503 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.971204 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.980919 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:47 crc kubenswrapper[4842]: I0202 06:46:47.992966 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.004515 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.020160 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.020209 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.020244 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.020266 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.020280 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.122986 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.123029 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.123038 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.123054 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.123064 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.225952 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.226023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.226065 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.226095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.226121 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.329305 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.329345 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.329355 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.329369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.329378 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.335469 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.357446 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.378949 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.382268 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:49:05.173493738 +0000 UTC Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.399580 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.419738 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.432703 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.432762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.432779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.432806 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.432824 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.443143 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.460249 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.476829 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.498328 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.518911 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
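The patch bodies in these status_manager entries are hard to read because the logger quotes them as nested Go strings (hence the runs of escaped quotes). A small hedged helper like the sketch below can unquote one layer and pretty-print the JSON so the $setElementOrder/conditions structure becomes visible. The embedded input is a shortened stand-in; in practice you would paste the full quoted string copied from a log entry.

// Hypothetical helper: unquote a logged status patch and pretty-print it.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"strconv"
)

func main() {
	// Shortened stand-in for one of the logged patch bodies (assumption: the
	// real input is the full Go-quoted string taken from the log line).
	quoted := `"{\"metadata\":{\"uid\":\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\"},\"status\":{\"$setElementOrder/conditions\":[{\"type\":\"Ready\"}]}}"`

	raw, err := strconv.Unquote(quoted) // reverses one layer of the logger's Go quoting
	if err != nil {
		fmt.Println("unquote failed:", err)
		return
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(raw), "", "  "); err != nil {
		fmt.Println("not valid JSON:", err)
		return
	}
	fmt.Println(pretty.String())
}

The $setElementOrder/conditions key marks these as strategic-merge patches over the pod's condition list, which is why a single rejected webhook call blocks the whole status update rather than individual fields.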
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.535516 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.535567 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.535584 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.535607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.535621 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
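The NodeNotReady events above always accompany a single Ready condition flipping to False. As a readability aid, the sketch below decodes the exact condition object from the setters.go:603 entries into a local struct; the struct is a stand-in for k8s.io/api/core/v1.NodeCondition, assumed here only to keep the example dependency-free.

// Hypothetical decoding of the "Node became not ready" condition logged above.
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields shown in the log (stand-in type, not the real API struct).
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition copied verbatim from the setters.go:603 entries in this log.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// A node is treated as Ready only when this condition's status is "True".
	fmt.Printf("node Ready=%s because %s: %s\n", c.Status, c.Reason, c.Message)
}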
Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.545367 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.564328 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.595642 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"1 6111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.915707 6111 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.916042 6111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 06:46:45.916095 6111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 06:46:45.916105 6111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:46:45.916143 6111 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 06:46:45.916155 6111 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 06:46:45.916170 6111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 06:46:45.916188 6111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:46:45.916197 6111 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:46:45.916204 6111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:46:45.916266 6111 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:46:45.916303 6111 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:46:45.916310 6111 factory.go:656] Stopping watch factory\\\\nI0202 06:46:45.916334 6111 ovnkube.go:599] Stopped ovnkube\\\\nI0202 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.618473 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.638428 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.638468 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc 
kubenswrapper[4842]: I0202 06:46:48.638483 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.638507 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.638390 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.638522 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.741605    4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.741690    4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.741715    4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.741748    4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.741774    4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.774874    4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/1.log"
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.775909    4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/0.log"
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.780653    4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c" exitCode=1
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.780745    4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c"}
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.780846    4842 scope.go:117] "RemoveContainer" containerID="2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5"
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.781672    4842 scope.go:117] "RemoveContainer" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c"
Feb 02 06:46:48 crc kubenswrapper[4842]: E0202 06:46:48.781928    4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59"
Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.805768    4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.822650 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.838746 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.849410 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.849569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.849649 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.849753 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.849975 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.856828 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.876730 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"na
me\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.892985 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.908946 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.920847 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.933713 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.951690 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.952516 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.952606 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.952673 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.952735 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.952913 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:48Z","lastTransitionTime":"2026-02-02T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.969106 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:48 crc kubenswrapper[4842]: I0202 06:46:48.984746 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.002823 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2b05a6c8e30bfc10a9d0ffd9524ead56223a744b2799856c542758af23d773e5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:45Z\\\",\\\"message\\\":\\\"1 6111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.915707 6111 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0202 06:46:45.916042 6111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0202 06:46:45.916095 6111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0202 06:46:45.916105 6111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:46:45.916143 6111 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0202 06:46:45.916155 6111 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0202 06:46:45.916170 6111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0202 06:46:45.916188 6111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:46:45.916197 6111 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:46:45.916204 6111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:46:45.916266 6111 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:46:45.916303 6111 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:46:45.916310 6111 factory.go:656] Stopping watch factory\\\\nI0202 06:46:45.916334 6111 ovnkube.go:599] Stopped ovnkube\\\\nI0202 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service 
k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":
\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:48Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.025307 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.056835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.057061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.057212 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.057401 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.057557 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.160611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.160664 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.160683 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.160709 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.160732 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.264192 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.264264 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.264279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.264304 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.264322 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.367131 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.367525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.367997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.368357 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.368484 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.383759 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:44:27.995604323 +0000 UTC
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.433371 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.433420 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:49 crc kubenswrapper[4842]: E0202 06:46:49.433538 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.433383 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:49 crc kubenswrapper[4842]: E0202 06:46:49.433693 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:46:49 crc kubenswrapper[4842]: E0202 06:46:49.433816 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.471107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.471454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.471592 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.471738 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.471854 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.573931 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.574279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.574492 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.574641 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.574863 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.677951 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.678010 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.678027 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.678051 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.678068 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.781177 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.781572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.781706 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.781835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.781974 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.787830 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/1.log"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.794193 4842 scope.go:117] "RemoveContainer" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c"
Feb 02 06:46:49 crc kubenswrapper[4842]: E0202 06:46:49.794524 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.816906 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.835993 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.853484 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.872185 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.886007 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.886064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.886086 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.886116 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.886139 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.893142 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.913471 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.929104 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.944807 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.959046 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.981170 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.989780 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.989861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.989890 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.989922 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.989946 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:49Z","lastTransitionTime":"2026-02-02T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.995273 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm"] Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.996096 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.998197 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 06:46:49 crc kubenswrapper[4842]: I0202 06:46:49.998395 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.001945 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:49Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.024178 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.053345 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.068382 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.068591 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.068720 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.068815 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wlzs\" (UniqueName: \"kubernetes.io/projected/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-kube-api-access-8wlzs\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.075433 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.090566 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.092463 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.092515 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.092558 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.092581 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.092599 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.110433 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.127612 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.139908 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.155395 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.169492 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.169558 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.169610 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.169655 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wlzs\" (UniqueName: \"kubernetes.io/projected/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-kube-api-access-8wlzs\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.170932 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.171491 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.181804 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.181992 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.195333 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.195393 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.195413 
4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.195437 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.195463 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.203280 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wlzs\" (UniqueName: \"kubernetes.io/projected/cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3-kube-api-access-8wlzs\") pod \"ovnkube-control-plane-749d76644c-gkdfm\" (UID: \"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.209077 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.228708 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.257373 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.283051 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.298090 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.298138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.298158 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.298180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.298196 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.300527 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.317702 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.317740 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.336862 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.354817 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.368278 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.384096 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:05:52.911008211 +0000 UTC Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.401545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.401596 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.401614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.401635 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.401650 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.503967 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.503992 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.504018 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.504032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.504041 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.606523 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.606569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.606587 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.606611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.606629 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.711624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.711689 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.711708 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.711734 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.711753 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.800408 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" event={"ID":"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3","Type":"ContainerStarted","Data":"73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.800462 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" event={"ID":"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3","Type":"ContainerStarted","Data":"2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.800478 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" event={"ID":"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3","Type":"ContainerStarted","Data":"30dc0e188446265183d7471d7abe21748afca9fd3abb7dc4c4d1557bc2fc214d"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.813946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.814007 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.814020 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.814042 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.814058 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.828337 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.844206 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.859289 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.875706 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.890379 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.904428 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.912515 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.916988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.917042 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.917064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.917092 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.917111 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:50Z","lastTransitionTime":"2026-02-02T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.922439 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.941647 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e635
5e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.960485 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:50 crc kubenswrapper[4842]: I0202 06:46:50.981041 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:50Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.009380 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.020004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.020046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.020055 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.020072 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.020082 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.032389 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.044325 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.065510 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.123692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.123762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.123779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.123808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.123827 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.141711 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9chjr"] Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.142487 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.142583 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.164822 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.182660 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.182818 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.182883 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.182927 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183041 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183107 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:07.1830866 +0000 UTC m=+52.560354552 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183199 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:47:07.183185032 +0000 UTC m=+52.560452974 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183338 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183384 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:07.183371167 +0000 UTC m=+52.560639119 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183490 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183521 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183541 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.183630 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:07.183611892 +0000 UTC m=+52.560879834 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.186394 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.204212 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.227146 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.228833 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.228894 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.228917 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.228955 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.228975 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.253055 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.271181 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.284533 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.284693 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.284733 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.284760 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.284768 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5htc5\" (UniqueName: \"kubernetes.io/projected/4f6c3b51-669c-4c7b-a23a-ed68d139849e-kube-api-access-5htc5\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.284838 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:07.284815882 +0000 UTC m=+52.662083824 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.284912 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.289309 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.304982 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.321482 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.332192 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.332279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.332300 4842 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.332326 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.332345 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.345916 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\
\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.368611 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.384700 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:01:23.567711841 +0000 UTC Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.386353 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.386457 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5htc5\" (UniqueName: \"kubernetes.io/projected/4f6c3b51-669c-4c7b-a23a-ed68d139849e-kube-api-access-5htc5\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.387038 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.387175 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:46:51.887145719 +0000 UTC m=+37.264413661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.397481 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.421487 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5htc5\" (UniqueName: \"kubernetes.io/projected/4f6c3b51-669c-4c7b-a23a-ed68d139849e-kube-api-access-5htc5\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.429618 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.432872 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.432933 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.433030 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.433070 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.433251 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.433411 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.435831 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.435899 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.435925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.435960 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.435984 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.458948 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b8
0a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.476149 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.496925 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:51Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.539573 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.539633 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.539650 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.539679 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.539706 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.642708 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.642796 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.642813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.642837 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.642857 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.746878 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.746944 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.746961 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.746987 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.747004 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.850424 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.850488 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.850504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.850530 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.850548 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.891763 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.892066 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: E0202 06:46:51.892165 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:46:52.892138291 +0000 UTC m=+38.269406233 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.954306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.954366 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.954384 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.954409 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:51 crc kubenswrapper[4842]: I0202 06:46:51.954429 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:51Z","lastTransitionTime":"2026-02-02T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.057643 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.057700 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.057718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.057742 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.057761 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.161109 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.161181 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.161201 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.161248 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.161266 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.264863 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.264928 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.264946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.264972 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.264990 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.367638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.367680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.367691 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.367707 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.367719 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.385501 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:47:16.056349826 +0000 UTC Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.432848 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:52 crc kubenswrapper[4842]: E0202 06:46:52.433119 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.472463 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.472545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.472578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.472614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.472635 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.575604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.575662 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.575672 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.575690 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.575700 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.678104 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.678165 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.678182 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.678207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.678264 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.784272 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.784330 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.784350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.784373 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.784397 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.887370 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.887438 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.887448 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.887464 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.887473 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.901015 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:46:52 crc kubenswrapper[4842]: E0202 06:46:52.901251 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:52 crc kubenswrapper[4842]: E0202 06:46:52.901379 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:46:54.901351918 +0000 UTC m=+40.278619860 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.971694 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.973141 4842 scope.go:117] "RemoveContainer" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c" Feb 02 06:46:52 crc kubenswrapper[4842]: E0202 06:46:52.973424 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.989718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.989802 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.989827 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.989861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:52 crc kubenswrapper[4842]: I0202 06:46:52.989887 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:52Z","lastTransitionTime":"2026-02-02T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.092642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.092707 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.092729 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.092757 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.092782 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.195940 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.196003 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.196021 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.196053 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.196073 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.299397 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.299470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.299493 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.299520 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.299539 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.386189 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:31:13.527043198 +0000 UTC Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.403884 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.403967 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.403984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.404008 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.404025 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.433525 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.433615 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:46:53 crc kubenswrapper[4842]: E0202 06:46:53.433700 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:46:53 crc kubenswrapper[4842]: E0202 06:46:53.433783 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.433873 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:46:53 crc kubenswrapper[4842]: E0202 06:46:53.434017 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.508499 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.508552 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.508569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.508592 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.508610 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.611560 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.611621 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.611638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.611661 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.611678 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.714364 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.714419 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.714436 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.714458 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.714475 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.817614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.817695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.817718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.817748 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.817771 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.921384 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.921750 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.921883 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.922023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:53 crc kubenswrapper[4842]: I0202 06:46:53.922154 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:53Z","lastTransitionTime":"2026-02-02T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.025889 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.025995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.026018 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.026047 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.026066 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.129415 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.129483 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.129503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.129528 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.129546 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.232115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.232180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.232197 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.232248 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.232268 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.335274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.335327 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.335345 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.335373 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.335390 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.386938 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:47:39.105662089 +0000 UTC
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.432608 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:46:54 crc kubenswrapper[4842]: E0202 06:46:54.432815 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.441430 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.441522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.441540 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.441563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.441580 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.544550 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.544617 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.544657 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.544692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.544714 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.648186 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.648382 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.648422 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.648504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.648529 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.751810 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.751845 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.751858 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.751874 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.751885 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.855510 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.855585 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.855611 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.855658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.855683 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.926617 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:46:54 crc kubenswrapper[4842]: E0202 06:46:54.926822 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:46:54 crc kubenswrapper[4842]: E0202 06:46:54.926961 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:46:58.926932055 +0000 UTC m=+44.304199997 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.959453 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.959542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.959569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.959605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:54 crc kubenswrapper[4842]: I0202 06:46:54.959629 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:54Z","lastTransitionTime":"2026-02-02T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.062886 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.062953 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.062969 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.062990 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.063005 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.166062 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.166112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.166123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.166139 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.166149 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.269589 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.269652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.269669 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.269695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.269713 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.373209 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.373310 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.373328 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.373353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.373370 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.388160 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:05:45.349260439 +0000 UTC
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.432967 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.433039 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.433039 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:55 crc kubenswrapper[4842]: E0202 06:46:55.433181 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:46:55 crc kubenswrapper[4842]: E0202 06:46:55.433373 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:46:55 crc kubenswrapper[4842]: E0202 06:46:55.433525 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.456299 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476200 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476796 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476857 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476879 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.476898 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.495425 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.522840 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.543209 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.566133 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.579260 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.579314 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.579337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.579365 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.579387 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.587704 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.602065 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.617209 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.633484 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.652051 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.670963 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.682350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.682421 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.682439 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.682466 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.682486 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.696188 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.723166 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.742679 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.779754 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea
69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.786040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.786108 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.786129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.786158 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.786180 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.889751 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.889801 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.889819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.889843 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.889860 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.974898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.974956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.974975 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.974997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:55 crc kubenswrapper[4842]: I0202 06:46:55.975014 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:55Z","lastTransitionTime":"2026-02-02T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:46:55 crc kubenswrapper[4842]: E0202 06:46:55.996922 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.002000 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.002066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.002087 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.002146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.002166 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.045987 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:56Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.055609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.055679 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.055704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.055734 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.055755 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.079042 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:56Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.084954 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.085004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.085019 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.085043 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.085058 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.101733 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:56Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.105749 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.105788 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.105799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.105817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.105829 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.122433 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:46:56Z is after 2025-08-24T17:21:41Z" Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.122628 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.124474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.124548 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.124563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.124589 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.124605 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.227608 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.227699 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.227721 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.227754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.227775 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.330327 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.330398 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.330423 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.330452 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.330473 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.388755 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:52:49.015804966 +0000 UTC
Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.432894 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:46:56 crc kubenswrapper[4842]: E0202 06:46:56.433109 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.433409 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.433441 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.433457 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.433478 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:46:56 crc kubenswrapper[4842]: I0202 06:46:56.433494 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:46:56Z","lastTransitionTime":"2026-02-02T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[repeats of this five-entry node-status block (identical apart from timestamps) at 06:46:56.536, .640, .743, .846, .950 and 06:46:57.053, .156 omitted]
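The block above is the kubelet's node-status sync: four event recordings plus a re-stamped Ready=False condition, emitted roughly every 100 ms while the container runtime reports its network as not ready. A minimal Go sketch of how such a condition could be assembled; the types and function names here are illustrative stand-ins, not the kubelet's actual setters.go code:

// sketch: building a Ready=False node condition while the runtime
// network is down; mirrors the logged JSON payload.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"time"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

// readyCondition re-stamps the Ready condition on every sync; while
// networkErr is non-nil the node stays NotReady with KubeletNotReady.
func readyCondition(now time.Time, networkErr error) NodeCondition {
	c := NodeCondition{
		Type:               "Ready",
		LastHeartbeatTime:  now.UTC().Format(time.RFC3339),
		LastTransitionTime: now.UTC().Format(time.RFC3339),
	}
	if networkErr != nil {
		c.Status, c.Reason = "False", "KubeletNotReady"
		c.Message = "container runtime network not ready: " + networkErr.Error()
		return c
	}
	c.Status, c.Reason, c.Message = "True", "KubeletReady", "kubelet is posting ready status"
	return c
}

func main() {
	err := fmt.Errorf("NetworkReady=false reason:NetworkPluginNotReady message:no CNI configuration file in /etc/kubernetes/cni/net.d/")
	json.NewEncoder(os.Stdout).Encode(readyCondition(time.Now(), err))
}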
[repeats of the five-entry node-status block at 06:46:57.259 and .362 omitted]
Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.389659 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:09:01.410578989 +0000 UTC
Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.432871 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.433031 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:57 crc kubenswrapper[4842]: E0202 06:46:57.433095 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:46:57 crc kubenswrapper[4842]: I0202 06:46:57.432871 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:57 crc kubenswrapper[4842]: E0202 06:46:57.433294 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:46:57 crc kubenswrapper[4842]: E0202 06:46:57.433404 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[repeat of the five-entry node-status block at 06:46:57.468 omitted]
[repeats of the five-entry node-status block at 06:46:57.571, .675, .779, .883, .986 and 06:46:58.090, .193, .298 omitted]
Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.390782 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 22:10:21.106797238 +0000 UTC
[repeat of the five-entry node-status block at 06:46:58.402 omitted]
Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.433539 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:46:58 crc kubenswrapper[4842]: E0202 06:46:58.433784 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
[repeats of the five-entry node-status block at 06:46:58.505, .609, .713, .816 omitted]
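Note the certificate_manager lines: the expiration (2026-02-24 05:53:03) never changes, but each pass logs a different rotation deadline (2025-11-06, 2025-11-15, 2025-12-03, ...). That is expected behavior: client-go's certificate manager re-draws a jittered deadline inside the certificate's validity window on every evaluation. A sketch of that computation; the 70%-90% span and the issuance date are assumptions for illustration, not quotes of the client-go source:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a fresh jittered point inside the validity
// window each time it is called, which is why the logged deadline
// moves around while the expiry stays fixed.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed issuance date
	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter)) // different each call
	}
}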
[repeat of the five-entry node-status block at 06:46:58.919 omitted]
Feb 02 06:46:58 crc kubenswrapper[4842]: I0202 06:46:58.983069 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:46:58 crc kubenswrapper[4842]: E0202 06:46:58.983446 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:46:58 crc kubenswrapper[4842]: E0202 06:46:58.983628 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:47:06.983586834 +0000 UTC m=+52.360854916 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered
[repeats of the five-entry node-status block at 06:46:59.023, .127 omitted]
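The mount failure above is parked by the volume manager's per-operation exponential backoff: the retry is scheduled 8 s out (06:47:06.98). A sketch of that doubling schedule, assuming a 500 ms initial delay and a ~2 m cap (both values are assumptions for illustration); under those assumptions the fifth consecutive failure lands on 500 ms * 2^4 = 8 s, matching the logged durationBeforeRetry:

package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond // assumed base delay
	maxDelay     = 2 * time.Minute        // assumed cap
)

// durationBeforeRetry doubles the wait per consecutive failure,
// mirroring the volume manager's backoff in spirit.
func durationBeforeRetry(failures int) time.Duration {
	d := initialDelay
	for i := 1; i < failures; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for f := 1; f <= 6; f++ {
		fmt.Printf("failure %d -> durationBeforeRetry %v\n", f, durationBeforeRetry(f))
	}
}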
[repeats of the five-entry node-status block at 06:46:59.231, .335 omitted]
Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.391565 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:07:38.155641059 +0000 UTC
Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.433456 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.433528 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:46:59 crc kubenswrapper[4842]: I0202 06:46:59.433551 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:46:59 crc kubenswrapper[4842]: E0202 06:46:59.433699 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:46:59 crc kubenswrapper[4842]: E0202 06:46:59.433894 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:46:59 crc kubenswrapper[4842]: E0202 06:46:59.434132 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[repeat of the five-entry node-status block at 06:46:59.438 omitted]
[repeats of the five-entry node-status block at 06:46:59.543, .646, .749, .852, .956 and 06:47:00.060, .164, .273, .376 omitted]
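For working with an excerpt this repetitive, a small filter that counts distinct payloads is handy; the bracketed repeat notes in this excerpt were derived the same way. A standalone sketch, assuming klog-style lines where the payload follows the microsecond timestamp and PID (payloads that embed their own timestamps, such as the Ready condition JSON, will still differ second to second):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// payload starts after the klog timestamp and PID,
// e.g. after "I0202 06:46:56.433409 4842 ".
var prefix = regexp.MustCompile(`^.*?\d{2}:\d{2}:\d{2}\.\d{6} +\d+ +`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // kubelet lines can be very long
	counts := map[string]int{}
	var order []string // preserve first-seen order
	for sc.Scan() {
		key := prefix.ReplaceAllString(sc.Text(), "")
		if counts[key] == 0 {
			order = append(order, key)
		}
		counts[key]++
	}
	for _, key := range order {
		fmt.Printf("%6dx %s\n", counts[key], key)
	}
}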
Feb 02 06:47:00 crc kubenswrapper[4842]: I0202 06:47:00.391939 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:38:34.269235358 +0000 UTC
Feb 02 06:47:00 crc kubenswrapper[4842]: I0202 06:47:00.432883 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:00 crc kubenswrapper[4842]: E0202 06:47:00.433075 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
[repeats of the five-entry node-status block at 06:47:00.480, .585 omitted]
[repeats of the five-entry node-status block at 06:47:00.689, .792, .896, .998 and 06:47:01.101, .205 omitted]
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.392979 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:26:29.403231608 +0000 UTC
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.433156 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.433206 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:01 crc kubenswrapper[4842]: E0202 06:47:01.433358 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:01 crc kubenswrapper[4842]: I0202 06:47:01.433397 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:01 crc kubenswrapper[4842]: E0202 06:47:01.433543 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:01 crc kubenswrapper[4842]: E0202 06:47:01.433682 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:02 crc kubenswrapper[4842]: I0202 06:47:02.393320 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:41:16.779465721 +0000 UTC
Feb 02 06:47:02 crc kubenswrapper[4842]: I0202 06:47:02.433070 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:02 crc kubenswrapper[4842]: E0202 06:47:02.433289 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.393676 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:32:28.465455856 +0000 UTC
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.432550 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.432622 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:03 crc kubenswrapper[4842]: I0202 06:47:03.432654 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:03 crc kubenswrapper[4842]: E0202 06:47:03.432795 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:03 crc kubenswrapper[4842]: E0202 06:47:03.432862 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:03 crc kubenswrapper[4842]: E0202 06:47:03.432921 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.394825 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:26:31.71126672 +0000 UTC
Feb 02 06:47:04 crc kubenswrapper[4842]: I0202 06:47:04.433200 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:04 crc kubenswrapper[4842]: E0202 06:47:04.433708 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
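Each "No sandbox for pod can be found" line is followed by pod_workers refusing to sync the pod: a new sandbox for a CNI-networked pod cannot be created while the runtime network is unready, so the worker returns an error and retries later (host-network pods are exempt, which is why the rest of the node keeps running). A compressed Go sketch of that guard, using hypothetical pod and syncPod names rather than the kubelet's real types:

package main

import (
	"errors"
	"fmt"
)

// pod is a minimal stand-in for the kubelet's pod object.
type pod struct {
	name        string
	hostNetwork bool
}

var errNetworkNotReady = errors.New(
	"network is not ready: container runtime network not ready: NetworkReady=false")

// syncPod sketches the pod-worker guard seen in the log: a pod that has no
// sandbox yet and is not host-networked cannot be started while the runtime
// network is unready, so the sync is skipped and retried on a later pass.
func syncPod(p pod, networkReady bool) error {
	if !networkReady && !p.hostNetwork {
		return errNetworkNotReady
	}
	// ... create sandbox, pull images, start containers ...
	return nil
}

func main() {
	p := pod{name: "openshift-multus/network-metrics-daemon-9chjr"}
	if err := syncPod(p, false); err != nil {
		fmt.Printf("Error syncing pod, skipping: %v\n", err)
	}
}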
Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.396106 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 18:26:39.887517205 +0000 UTC
Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.432583 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.432583 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.432610 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:05 crc kubenswrapper[4842]: E0202 06:47:05.432832 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:05 crc kubenswrapper[4842]: E0202 06:47:05.433522 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:05 crc kubenswrapper[4842]: E0202 06:47:05.433615 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.434392 4842 scope.go:117] "RemoveContainer" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c"
Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.462529 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.480872 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.497840 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.514784 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.529464 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.541735 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.544249 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.544339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.544359 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.544379 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.544428 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.560452 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.575591 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.589203 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.606406 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.625923 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.641553 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.649695 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.650079 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.650159 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.650272 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.650363 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.656238 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.669495 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc 
kubenswrapper[4842]: I0202 06:47:05.689512 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.708311 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.752976 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.753033 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.753050 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.753073 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.753091 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.855875 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.855916 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.855926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.855938 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.855947 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.870927 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/1.log" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.873167 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.873526 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.893176 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.912137 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.930199 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.953404 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.957905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.957970 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.957993 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.958024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.958048 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:05Z","lastTransitionTime":"2026-02-02T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.969453 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.985138 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:05 crc kubenswrapper[4842]: I0202 06:47:05.998583 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:05Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.023125 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.058641 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.060706 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.060736 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.060745 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.060758 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.060766 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.078025 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.094507 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.118970 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.138118 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.158749 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.164348 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.164414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.164427 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.164451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.164466 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.173022 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.176592 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.197425 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.207179 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.231689 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.231747 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.231755 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.231769 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.231779 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.248057 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.248736 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.252995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.253034 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.253045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.253061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.253072 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.271030 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.276145 4842 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329b
a568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.280374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.280538 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.280650 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.280750 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.280835 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.285160 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.294571 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.298347 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.298405 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.298419 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.298440 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.298453 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.300271 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.314480 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.316004 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.318820 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.318895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.318913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.318939 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.318959 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.332616 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.333568 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.333721 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.336183 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.336239 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.336257 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.336280 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.336300 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.350873 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.362836 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.380768 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.396376 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:29:56.279199742 +0000 UTC Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.397108 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.414057 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.428044 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.432601 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.432812 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.439408 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.439457 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.439471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.439492 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.439507 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.443868 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.468430 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.486924 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.502491 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.514467 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.542089 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.542149 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.542165 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.542189 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.542207 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.646067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.646146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.646172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.646200 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.646254 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.750297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.750361 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.750379 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.750403 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.750421 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.853798 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.853872 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.853891 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.853914 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.853928 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.881590 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/2.log"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.882560 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/1.log"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.886693 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951" exitCode=1
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.886776 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951"}
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.886850 4842 scope.go:117] "RemoveContainer" containerID="be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.887876 4842 scope.go:117] "RemoveContainer" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951"
Feb 02 06:47:06 crc kubenswrapper[4842]: E0202 06:47:06.888122 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.912390 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.931490 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.946645 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.956504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.956565 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.956584 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.956609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.956628 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:06Z","lastTransitionTime":"2026-02-02T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.962087 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"start
Time\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:06 crc kubenswrapper[4842]: I0202 06:47:06.990504 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be04c29f14a6b215fdf879a81e80710469ad64ea69ecd805614011c41944520c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:46:47Z\\\",\\\"message\\\":\\\"-lifecycle-manager/packageserver-service_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/packageserver-service\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.153\\\\\\\", Port:5443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0202 06:46:47.800293 6264 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where 
column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0202 06:46:47.800304 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.009868 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[
{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.026996 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.041677 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.059244 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.059284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.059295 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.059312 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.059325 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.062165 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.069622 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.069747 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.069831 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" 
failed. No retries permitted until 2026-02-02 06:47:23.069812358 +0000 UTC m=+68.447080270 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.077101 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.093855 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.109774 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.130745 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 
2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.147424 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.157834 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.162727 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.162753 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.162763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.162777 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.162788 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.175929 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.189310 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.265436 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.265470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.265478 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.265491 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.265499 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.271710 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.271803 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.271840 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.271874 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.271939 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.271979 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:39.271965771 +0000 UTC m=+84.649233673 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272038 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:47:39.272031572 +0000 UTC m=+84.649299484 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272088 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272108 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:39.272102844 +0000 UTC m=+84.649370756 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272154 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272164 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272173 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.272194 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:39.272188636 +0000 UTC m=+84.649456548 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.369001 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.369069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.369092 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.369120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.369142 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.372983 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.373177 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.373212 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.373273 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.373364 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:47:39.373336674 +0000 UTC m=+84.750604616 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.397500 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:22:13.189612462 +0000 UTC Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.433198 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.433262 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.433287 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.433428 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.433547 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.433655 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.472388 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.472483 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.472502 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.472527 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.472544 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.575869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.575924 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.575943 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.575966 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.575984 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.679039 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.679090 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.679107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.679129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.679147 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.783625 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.783678 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.783698 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.783727 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.783748 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.888480 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.888564 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.888622 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.888946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.888989 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.894861 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/2.log" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.905617 4842 scope.go:117] "RemoveContainer" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951" Feb 02 06:47:07 crc kubenswrapper[4842]: E0202 06:47:07.905892 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.925598 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.925598 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.944030 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.962137 4842 status_manager.go:875] "Failed to update status for pod"
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.981511 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.992140 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.992204 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.992267 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.992300 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:07 crc kubenswrapper[4842]: I0202 06:47:07.992322 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:07Z","lastTransitionTime":"2026-02-02T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:07.999931 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:07Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.018581 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.034357 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.049665 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.069186 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.084814 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.095132 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.095191 4842 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.095210 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.095276 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.095300 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.105116 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.105116 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints
registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.126667 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.147142 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.181392 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148
b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.198462 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.198525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.198551 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.198583 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.198605 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.207320 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.226007 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.247207 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-02T06:47:08Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.303095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.303187 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.303211 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.303702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.303999 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.398426 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:53:28.093596147 +0000 UTC Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.408082 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.408160 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.408185 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.408260 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.408286 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.433370 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:08 crc kubenswrapper[4842]: E0202 06:47:08.433548 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.511075 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.511132 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.511150 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.511178 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.511196 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.614688 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.614746 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.614765 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.614792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.614810 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.717747 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.717801 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.717817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.717841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.717858 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.821350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.821410 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.821429 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.821454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.821473 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.924998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.925049 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.925057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.925075 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:08 crc kubenswrapper[4842]: I0202 06:47:08.925091 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:08Z","lastTransitionTime":"2026-02-02T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.029256 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.029322 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.029345 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.029367 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.029381 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.132855 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.132911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.132924 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.132947 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.132959 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.236045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.236106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.236124 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.236151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.236169 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.338451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.338515 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.338537 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.338567 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.338585 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.398737 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:24:17.076782307 +0000 UTC
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.433408 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.433462 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.433576 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:09 crc kubenswrapper[4842]: E0202 06:47:09.433803 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:09 crc kubenswrapper[4842]: E0202 06:47:09.434102 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:09 crc kubenswrapper[4842]: E0202 06:47:09.434199 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.446911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.446979 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.446998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.447028 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.447051 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.550755 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.550817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.550837 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.550863 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.550883 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.654579 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.654635 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.654646 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.654666 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.654679 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.758355 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.758421 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.758433 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.758450 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.758466 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.862411 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.862462 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.862477 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.862502 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.862514 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.966375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.966429 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.966445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.966466 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:09 crc kubenswrapper[4842]: I0202 06:47:09.966484 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:09Z","lastTransitionTime":"2026-02-02T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.069844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.069905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.069926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.069957 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.069978 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.173124 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.173176 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.173188 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.173206 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.173240 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.276461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.276509 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.276522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.276542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.276555 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.379922 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.379997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.380017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.380048 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.380068 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.400008 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:16:43.424681894 +0000 UTC
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.433489 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:10 crc kubenswrapper[4842]: E0202 06:47:10.433640 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.481943 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.481983 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.481992 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.482008 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.482018 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.584274 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.584311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.584319 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.584332 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.584341 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.687548 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.687617 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.687637 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.687662 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.687683 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.790426 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.790462 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.790471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.790485 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.790493 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.894702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.894758 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.894776 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.894803 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.894825 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.998524 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.998597 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.998620 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.998648 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:10 crc kubenswrapper[4842]: I0202 06:47:10.998667 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:10Z","lastTransitionTime":"2026-02-02T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.102425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.102494 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.102516 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.102550 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.102576 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.205988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.206052 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.206065 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.206119 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.206139 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.309193 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.309278 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.309309 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.309339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.309361 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.400882 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:33:31.766034969 +0000 UTC
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.412920 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.412982 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.413001 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.413025 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.413045 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.433548 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.433645 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.433853 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:11 crc kubenswrapper[4842]: E0202 06:47:11.433855 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:11 crc kubenswrapper[4842]: E0202 06:47:11.434080 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:11 crc kubenswrapper[4842]: E0202 06:47:11.434452 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.517108 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.517196 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.517253 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.517285 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.517306 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
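Every failure in this capture traces back to the same root cause named in the messages: an empty /etc/kubernetes/cni/net.d/. On a healthy node the network provider writes a CNI config there, after which sandbox creation can proceed and the Ready condition flips. The Go sketch below only illustrates the general shape of the conflist file the kubelet is waiting for; the bridge/host-local plugin choice and the subnet are illustrative assumptions, not this cluster's actual multus/OVN-managed configuration, and hand-writing such a file is not a fix on OpenShift.

package main

import (
	"fmt"
	"path/filepath"
)

// cniDir is the directory named in the kubelet errors above.
const cniDir = "/etc/kubernetes/cni/net.d"

// exampleConflist shows the general shape of a CNI network config list.
// Plugin type, bridge name, and subnet are placeholders for illustration.
const exampleConflist = `{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
    }
  ]
}`

func main() {
	// Reproduce the condition the kubelet keeps reporting: is any
	// *.conf/*.conflist present in the CNI config directory?
	matches, err := filepath.Glob(filepath.Join(cniDir, "*.conf*"))
	if err != nil {
		panic(err)
	}
	if len(matches) == 0 {
		fmt.Printf("%s is empty; a provider would install something like:\n%s\n", cniDir, exampleConflist)
		return
	}
	fmt.Println("found CNI config:", matches)
}

Run on the node, an empty result from the glob corresponds exactly to the "no CNI configuration file" state logged above, and a populated result is what the network provider's daemonset is expected to produce once it starts.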
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.620176 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.620207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.620251 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.620265 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.620278 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.729011 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.729113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.729134 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.729333 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.729367 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.832300 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.832371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.832394 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.832420 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.832438 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.935461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.935532 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.935551 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.935577 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:11 crc kubenswrapper[4842]: I0202 06:47:11.935596 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:11Z","lastTransitionTime":"2026-02-02T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.040357 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.040428 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.040448 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.040478 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.040497 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.144058 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.144137 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.144155 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.144185 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.144204 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.247793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.247871 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.247895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.247926 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.247950 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.351337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.351425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.351440 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.351470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.351490 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.401848 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:21:36.895349 +0000 UTC
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.432864 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:12 crc kubenswrapper[4842]: E0202 06:47:12.433052 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.455313 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.455414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.455439 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.455474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.455498 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.559448 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.559543 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.559562 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.559598 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.559625 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.662580 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.662638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.662656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.662679 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.662700 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.766083 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.766155 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.766175 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.766203 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.766254 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.869748 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.869821 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.869839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.869864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.869884 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.974007 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.974108 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.974136 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.974172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:12 crc kubenswrapper[4842]: I0202 06:47:12.974198 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:12Z","lastTransitionTime":"2026-02-02T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.077925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.077995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.078007 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.078031 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.078049 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.181417 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.181474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.181493 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.181523 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.181540 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.285182 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.285280 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.285300 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.285332 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.285360 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.388301 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.388343 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.388352 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.388374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.388388 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.402978 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:31:22.418927952 +0000 UTC
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.433331 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.433378 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.433504 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:13 crc kubenswrapper[4842]: E0202 06:47:13.433762 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:13 crc kubenswrapper[4842]: E0202 06:47:13.433910 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:13 crc kubenswrapper[4842]: E0202 06:47:13.434137 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.492070 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.492144 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.492168 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.492202 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.492271 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.595817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.595887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.595906 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.595939 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.595963 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.699329 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.699447 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.699467 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.699492 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.699511 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.802625 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.803182 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.803298 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.803398 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.803526 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.906812 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.906853 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.906863 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.906878 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:13 crc kubenswrapper[4842]: I0202 06:47:13.906888 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:13Z","lastTransitionTime":"2026-02-02T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.009995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.010051 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.010069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.010093 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.010110 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.113547 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.113614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.113639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.113666 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.113685 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.216362 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.216425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.216442 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.216470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.216489 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.319568 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.319625 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.319644 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.319667 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.319684 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.404164 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:06:13.661010312 +0000 UTC Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.422560 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.422624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.422638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.422658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.422673 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.433119 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:14 crc kubenswrapper[4842]: E0202 06:47:14.433383 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.525803 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.525865 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.525886 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.525911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.525929 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.628870 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.628939 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.628959 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.628983 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.629001 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.732812 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.732876 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.732888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.732908 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.732923 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.836152 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.836250 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.836279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.836307 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.836327 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.939443 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.939490 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.939503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.939522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:14 crc kubenswrapper[4842]: I0202 06:47:14.939537 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:14Z","lastTransitionTime":"2026-02-02T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.043374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.043445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.043473 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.043503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.043528 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.147056 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.147112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.147129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.147232 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.147261 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.250525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.250588 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.250609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.250636 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.250655 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.354252 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.354704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.354913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.355122 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.355362 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.404442 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 17:40:26.332310697 +0000 UTC Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.436151 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:15 crc kubenswrapper[4842]: E0202 06:47:15.436602 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.436897 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:15 crc kubenswrapper[4842]: E0202 06:47:15.436963 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.437131 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:15 crc kubenswrapper[4842]: E0202 06:47:15.437316 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.457612 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.457660 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.457678 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.457700 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.457717 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.461974 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.480876 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.506324 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.522314 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.536810 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.553920 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.559980 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.560047 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.560073 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.560107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.560135 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.569994 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.588108 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.611313 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.633368 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.651740 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.663018 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.663071 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.663089 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.663114 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.663132 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.669173 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.690456 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.709988 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.729814 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.744084 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.763943 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:15Z is after 
2025-08-24T17:21:41Z"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.766103 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.766392 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.766614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.766853 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.767048 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.869619 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.869654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.869665 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.869681 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.869693 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.972389 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.972417 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.972425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.972438 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:15 crc kubenswrapper[4842]: I0202 06:47:15.972446 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:15Z","lastTransitionTime":"2026-02-02T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.075057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.075338 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.075411 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.075488 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.075561 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.179096 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.179162 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.179180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.179204 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.179247 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.282692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.282754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.282773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.282800 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.282819 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.386262 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.386558 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.386699 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.386817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.386924 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.405604 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:17:01.118713144 +0000 UTC
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.432947 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.433208 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.489834 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.489902 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.489919 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.489946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.489964 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.593875 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.593944 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.593969 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.594001 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.594026 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.655156 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.655288 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.655309 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.655340 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.655366 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.678346 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:16Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.684729 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.684811 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.684836 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.684869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.684893 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.707331 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:16Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.713783 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.713842 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.713856 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.713880 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.713894 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.733883 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:16Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.740460 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.740532 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.740550 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.740576 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.740592 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.758573 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:16Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.763542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.763594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.763610 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.763636 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.763655 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.785243 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:16Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:16 crc kubenswrapper[4842]: E0202 06:47:16.785454 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.788059 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.788120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.788135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.788157 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.788175 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.893818 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.893880 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.893892 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.893918 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.893934 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.997391 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.997454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.997480 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.997515 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:16 crc kubenswrapper[4842]: I0202 06:47:16.997533 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:16Z","lastTransitionTime":"2026-02-02T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.100286 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.100341 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.100359 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.100382 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.100399 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.204471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.204535 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.204553 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.204578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.204599 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.307899 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.307977 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.307995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.308033 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.308052 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.406303 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:28:52.480184415 +0000 UTC Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.411132 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.411206 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.411255 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.411284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.411304 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.433684 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.433800 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:17 crc kubenswrapper[4842]: E0202 06:47:17.433888 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:17 crc kubenswrapper[4842]: E0202 06:47:17.434328 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.434466 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:17 crc kubenswrapper[4842]: E0202 06:47:17.434607 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.514737 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.514797 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.514808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.514828 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.514842 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.618048 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.618123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.618136 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.618159 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.618175 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.720876 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.720916 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.720925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.720949 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.720958 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.823508 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.823563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.823572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.823588 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.823597 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.926172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.926251 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.926264 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.926285 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:17 crc kubenswrapper[4842]: I0202 06:47:17.926301 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:17Z","lastTransitionTime":"2026-02-02T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.030439 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.030527 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.030555 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.030589 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.030613 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.133524 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.133608 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.133627 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.133657 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.133677 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.237200 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.237370 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.237393 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.237461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.237483 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.340859 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.340935 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.340954 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.340985 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.341004 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.407302 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:43:54.989643157 +0000 UTC Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.432732 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:18 crc kubenswrapper[4842]: E0202 06:47:18.432946 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.444159 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.444234 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.444243 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.444260 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.444269 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.552106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.552172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.552212 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.552282 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.552308 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.656445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.656514 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.656532 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.656597 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.656618 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.759720 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.759785 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.759807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.759833 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.759852 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.862980 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.863045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.863067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.863095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.863117 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.966537 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.966597 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.966618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.966640 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:18 crc kubenswrapper[4842]: I0202 06:47:18.966657 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:18Z","lastTransitionTime":"2026-02-02T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.069934 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.069993 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.070015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.070043 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.070064 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:19Z","lastTransitionTime":"2026-02-02T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.173338 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.173390 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.173410 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.173437 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.173460 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:19Z","lastTransitionTime":"2026-02-02T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.276156 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.276245 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.276258 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.276287 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.276304 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:19Z","lastTransitionTime":"2026-02-02T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.380180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.380300 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.380508 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.380557 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.380583 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:19Z","lastTransitionTime":"2026-02-02T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.408279 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:38:27.123488053 +0000 UTC Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.433054 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.433316 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.433572 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:19 crc kubenswrapper[4842]: E0202 06:47:19.433633 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:19 crc kubenswrapper[4842]: E0202 06:47:19.433737 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:19 crc kubenswrapper[4842]: E0202 06:47:19.433517 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.483626 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.483719 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.483742 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.483773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.483799 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:19Z","lastTransitionTime":"2026-02-02T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.586811 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.586884 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.586904 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.586937 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.586959 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:19Z","lastTransitionTime":"2026-02-02T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.689849 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.689893 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.689908 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.689930 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.689943 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:19Z","lastTransitionTime":"2026-02-02T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.792617 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.792833 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.792852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.792881 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.792899 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:19Z","lastTransitionTime":"2026-02-02T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.896450 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.896805 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.896958 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.897104 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.897259 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:19Z","lastTransitionTime":"2026-02-02T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.999598 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.999635 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.999644 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.999661 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:19 crc kubenswrapper[4842]: I0202 06:47:19.999672 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:19Z","lastTransitionTime":"2026-02-02T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.101998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.102256 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.102360 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.102425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.102481 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:20Z","lastTransitionTime":"2026-02-02T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.205956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.206203 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.206295 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.206374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.206475 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:20Z","lastTransitionTime":"2026-02-02T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.308904 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.308961 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.308972 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.308988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.308998 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:20Z","lastTransitionTime":"2026-02-02T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.408943 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:40:21.27410297 +0000 UTC Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.411988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.412040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.412053 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.412072 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.412084 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:20Z","lastTransitionTime":"2026-02-02T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.433347 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:20 crc kubenswrapper[4842]: E0202 06:47:20.433529 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.514946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.515063 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.515082 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.515111 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.515131 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:20Z","lastTransitionTime":"2026-02-02T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.618435 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.618504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.618525 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.618552 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.618572 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:20Z","lastTransitionTime":"2026-02-02T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.720730 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.720821 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.720846 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.720880 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.720903 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:20Z","lastTransitionTime":"2026-02-02T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.823741 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.823796 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.823813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.823837 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.823854 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:20Z","lastTransitionTime":"2026-02-02T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.926400 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.926642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.926733 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.926799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:20 crc kubenswrapper[4842]: I0202 06:47:20.926853 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:20Z","lastTransitionTime":"2026-02-02T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.029748 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.029803 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.029815 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.029834 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.029847 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:21Z","lastTransitionTime":"2026-02-02T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.132766 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.133061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.133128 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.133207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.133313 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:21Z","lastTransitionTime":"2026-02-02T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.236815 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.237120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.237189 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.237297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.237359 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:21Z","lastTransitionTime":"2026-02-02T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.339951 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.339992 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.340005 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.340023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.340036 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:21Z","lastTransitionTime":"2026-02-02T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.409816 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:57:59.287310811 +0000 UTC Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.433148 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.433148 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:21 crc kubenswrapper[4842]: E0202 06:47:21.433308 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:21 crc kubenswrapper[4842]: E0202 06:47:21.433389 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.433386 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:21 crc kubenswrapper[4842]: E0202 06:47:21.433475 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.441799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.441921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.441998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.442071 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.442129 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:21Z","lastTransitionTime":"2026-02-02T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.545286 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.545604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.545681 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.545762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.545839 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:21Z","lastTransitionTime":"2026-02-02T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.648772 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.648826 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.648843 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.648870 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.648892 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:21Z","lastTransitionTime":"2026-02-02T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.752052 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.752119 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.752132 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.752155 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.752171 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:21Z","lastTransitionTime":"2026-02-02T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.854796 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.854844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.854862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.854887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.854906 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:21Z","lastTransitionTime":"2026-02-02T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.957321 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.957369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.957382 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.957397 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:21 crc kubenswrapper[4842]: I0202 06:47:21.957409 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:21Z","lastTransitionTime":"2026-02-02T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.060090 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.060172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.060194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.060239 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.060255 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:22Z","lastTransitionTime":"2026-02-02T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.162793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.162851 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.162860 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.162879 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.162889 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:22Z","lastTransitionTime":"2026-02-02T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.264992 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.265060 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.265079 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.265100 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.265112 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:22Z","lastTransitionTime":"2026-02-02T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.367692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.367729 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.367739 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.367752 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.367763 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:22Z","lastTransitionTime":"2026-02-02T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.410386 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:25:13.646848527 +0000 UTC Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.432909 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:22 crc kubenswrapper[4842]: E0202 06:47:22.433394 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.433610 4842 scope.go:117] "RemoveContainer" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951" Feb 02 06:47:22 crc kubenswrapper[4842]: E0202 06:47:22.433960 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.470946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.470998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.471010 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.471029 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.471046 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:22Z","lastTransitionTime":"2026-02-02T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.573946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.573995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.574006 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.574024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.574036 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:22Z","lastTransitionTime":"2026-02-02T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.677986 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.678324 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.678421 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.678520 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.678611 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:22Z","lastTransitionTime":"2026-02-02T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.781827 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.782135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.782210 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.782297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.782359 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:22Z","lastTransitionTime":"2026-02-02T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.885007 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.885353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.885451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.885542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.885646 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:22Z","lastTransitionTime":"2026-02-02T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.988296 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.988626 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.988754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.988854 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:22 crc kubenswrapper[4842]: I0202 06:47:22.988954 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:22Z","lastTransitionTime":"2026-02-02T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.091873 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.091929 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.091942 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.091965 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.091981 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.157932 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:23 crc kubenswrapper[4842]: E0202 06:47:23.158268 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:47:23 crc kubenswrapper[4842]: E0202 06:47:23.158558 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:47:55.158528284 +0000 UTC m=+100.535796186 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.195263 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.195604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.195741 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.195892 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.196017 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.306367 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.306432 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.306449 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.306474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.306496 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.409064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.409134 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.409154 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.409184 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.409203 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.411119 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 04:02:28.346541564 +0000 UTC Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.432677 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.432729 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:23 crc kubenswrapper[4842]: E0202 06:47:23.432880 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.432911 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:23 crc kubenswrapper[4842]: E0202 06:47:23.433299 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:23 crc kubenswrapper[4842]: E0202 06:47:23.433147 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.511994 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.512027 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.512037 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.512054 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.512066 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.615312 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.615357 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.615367 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.615382 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.615391 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.718598 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.718642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.718656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.718673 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.718683 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.821934 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.821984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.821994 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.822012 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.822024 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.924731 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.924815 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.924828 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.924861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:23 crc kubenswrapper[4842]: I0202 06:47:23.924893 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:23Z","lastTransitionTime":"2026-02-02T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.027253 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.027324 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.027345 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.027374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.027392 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.130762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.131095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.131105 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.131121 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.131131 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.234524 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.234584 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.234595 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.234615 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.234889 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.337792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.337856 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.337869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.337891 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.338342 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.411616 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 05:41:50.626416011 +0000 UTC Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.433340 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:24 crc kubenswrapper[4842]: E0202 06:47:24.433573 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.440365 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.440406 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.440417 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.440433 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.440444 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.543167 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.543240 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.543252 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.543270 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.543280 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.646168 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.646268 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.646279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.646298 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.646311 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.749688 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.749747 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.749766 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.749794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.749813 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.852391 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.852437 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.852449 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.852468 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.852483 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.955134 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.955168 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.955180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.955195 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.955205 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:24Z","lastTransitionTime":"2026-02-02T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.969068 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/0.log" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.969140 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1fd21cd-ea6a-44a0-b136-f338fc97cf18" containerID="8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d" exitCode=1 Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.969183 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerDied","Data":"8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d"} Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.969842 4842 scope.go:117] "RemoveContainer" containerID="8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.986015 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:24Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:24 crc kubenswrapper[4842]: I0202 06:47:24.999623 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:24Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.012614 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.023631 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.042082 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.055644 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.057301 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.057399 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.057424 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.057459 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.057480 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.067296 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.085550 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.101816 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.116040 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.137103 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148
b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.150874 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.160639 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.160671 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.160682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.160704 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.160719 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.164797 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.177136 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.191363 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.206286 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.223801 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.264072 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.264129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.264141 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.264163 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.264177 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.367066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.367107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.367116 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.367135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.367146 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.412629 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:02:53.413457016 +0000 UTC
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.433157 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.433305 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:25 crc kubenswrapper[4842]: E0202 06:47:25.433337 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.433422 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:25 crc kubenswrapper[4842]: E0202 06:47:25.433571 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:25 crc kubenswrapper[4842]: E0202 06:47:25.433789 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.453178 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.469852 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.471284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.471338 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.471353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.471374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.471393 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.498868 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.518353 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.530983 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.541514 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.560450 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.571560 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.577465 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.577526 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.577545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.577571 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.577590 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.589886 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.602287 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.616008 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.629706 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.643912 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.653518 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.664195 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.676937 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e1
5c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.684698 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.684783 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.684809 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.684842 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.684864 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.690401 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.788353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.788443 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.788461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.788490 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.788508 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.891153 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.891206 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.891252 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.891277 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.891294 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.975566 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/0.log" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.975657 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerStarted","Data":"eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d"} Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994348 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:25Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994529 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994581 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994592 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994614 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:25 crc kubenswrapper[4842]: I0202 06:47:25.994627 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:25Z","lastTransitionTime":"2026-02-02T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.009198 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.027498 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148
b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.049906 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.066092 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.079180 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.093409 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.098192 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.098312 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.098371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.098456 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.098528 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.106261 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.119013 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.136098 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.153645 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.171079 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.187041 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.201678 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.202046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.202287 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.202454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.202340 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.202618 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.215679 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.233647 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.246949 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:26Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.305482 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.305529 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.305547 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.305571 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.305587 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.408886 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.408955 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.408978 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.409013 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.409034 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.414089 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 01:17:07.757159976 +0000 UTC Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.432711 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:26 crc kubenswrapper[4842]: E0202 06:47:26.433097 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.511076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.511175 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.511451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.511511 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.511529 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.614941 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.615109 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.615251 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.615367 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.615489 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.717724 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.717841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.717947 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.718076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.718159 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.820507 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.820772 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.820852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.820968 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.821046 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.923410 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.923652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.923733 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.923825 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.923904 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.995990 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.996037 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.996049 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.996066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:26 crc kubenswrapper[4842]: I0202 06:47:26.996079 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:26Z","lastTransitionTime":"2026-02-02T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.008046 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.017683 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.017922 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.018024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.018099 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.018159 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.031083 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.034630 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.034674 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
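[Annotation] Every retry in this burst fails for the same reason, spelled out at the end of each E-line: the serving certificate of the "node.network-node-identity.openshift.io" webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02T06:47:27Z. A minimal sketch for confirming the expiry from the node itself; the address comes from the log, everything else (package layout, output format) is illustrative:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    )

    func main() {
    	// Address taken from the failing webhook call in the log. Verification is
    	// exactly what fails here, so skip it and just read the presented certificate.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// NotAfter should match the 2025-08-24T17:21:41Z expiry in the x509 error.
    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Printf("NotBefore: %s\nNotAfter:  %s\n", cert.NotBefore, cert.NotAfter)
    }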
event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.034683 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.034701 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.034712 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.046413 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.050458 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.050496 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.050506 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.050520 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.050531 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.062321 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.066163 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.066207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.066239 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.066260 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.066276 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.082347 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:27Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.082485 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.084549 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
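[Annotation] The sequence above is the kubelet's bounded retry for node-status updates: five "Error updating node status, will retry" records, then "Unable to update node status" with "update node status exceeds retry count". A sketch of that control flow, under the assumption (from memory of the kubelet sources) that the retry budget is a fixed constant of 5; the function names here are illustrative, not the kubelet's own:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // nodeStatusUpdateRetry is assumed to be 5, matching the five E-lines above.
    const nodeStatusUpdateRetry = 5

    // updateNodeStatus retries a status-patch attempt a fixed number of times,
    // then gives up with the exact error string seen in the log.
    func updateNodeStatus(try func() error) error {
    	for i := 0; i < nodeStatusUpdateRetry; i++ {
    		if err := try(); err != nil {
    			fmt.Println("Error updating node status, will retry:", err)
    			continue
    		}
    		return nil
    	}
    	return errors.New("update node status exceeds retry count")
    }

    func main() {
    	// Simulate the failing webhook call: every attempt fails, so after
    	// five tries we reach "exceeds retry count".
    	err := updateNodeStatus(func() error {
    		return errors.New("x509: certificate has expired or is not yet valid")
    	})
    	fmt.Println(err)
    }

The loop only gives up for the current sync round; the kubelet schedules further rounds, which is consistent with the same burst repeating throughout this log.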
event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.084580 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.084590 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.084604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.084615 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.187971 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.188053 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.188079 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.188113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.188137 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.292046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.292106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.292119 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.292138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.292150 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.394472 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.394551 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.394607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.394642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.394667 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.416149 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 15:24:40.676097321 +0000 UTC Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.433511 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.433599 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.433728 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.433719 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.435925 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:27 crc kubenswrapper[4842]: E0202 06:47:27.435859 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
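[Annotation] The NotReady conditions and the pod sync errors above share one root: nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/, which is where the network provider (OVN-Kubernetes on this cluster, judging by the network-node-identity webhook) drops its config once it is up. A minimal check of that directory, sketched in Go; the path comes from the log, and the expected file names are typical rather than guaranteed:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Path comes straight from the kubelet message. A healthy node has at
    	// least one *.conf or *.conflist file here once the provider is running.
    	const cniConfDir = "/etc/kubernetes/cni/net.d/"

    	entries, err := os.ReadDir(cniConfDir)
    	if err != nil {
    		fmt.Println("cannot read CNI conf dir:", err)
    		return
    	}
    	if len(entries) == 0 {
    		fmt.Println("no CNI configuration files; NetworkReady stays false")
    		return
    	}
    	for _, e := range entries {
    		fmt.Println(e.Name())
    	}
    }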
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.498325 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.498394 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.498412 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.498441 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.498460 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.601132 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.601174 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.601189 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.601212 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.601255 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.703139 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.703211 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.703261 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.703319 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.703339 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.806319 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.806369 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.806380 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.806397 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.806407 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.909758 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.909799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.909813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.909830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:27 crc kubenswrapper[4842]: I0202 06:47:27.909840 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:27Z","lastTransitionTime":"2026-02-02T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.012268 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.012319 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.012337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.012361 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.012379 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.114908 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.115019 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.115045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.115076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.115100 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.218185 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.218273 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.218292 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.218316 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.218337 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.320522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.320583 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.320606 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.320633 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.320653 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.416920 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 01:39:31.884751975 +0000 UTC Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.422976 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.423047 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.423081 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.423139 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.423167 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.433437 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:28 crc kubenswrapper[4842]: E0202 06:47:28.433624 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.525621 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.525712 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.525734 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.525765 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.525791 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.628898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.628944 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.628959 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.628978 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.628992 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.732039 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.732123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.732145 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.732177 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.732197 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.835817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.835877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.835890 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.835909 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.835923 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.942114 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.942198 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.942297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.942330 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:28 crc kubenswrapper[4842]: I0202 06:47:28.942359 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:28Z","lastTransitionTime":"2026-02-02T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.046828 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.046893 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.046910 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.046932 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.046948 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.149610 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.149690 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.149708 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.149789 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.149822 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.254063 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.254142 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.254161 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.254267 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.254299 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.357242 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.357302 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.357320 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.357347 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.357368 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.418075 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:19:13.574913351 +0000 UTC Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.432679 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.432756 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:29 crc kubenswrapper[4842]: E0202 06:47:29.432847 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.432719 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:29 crc kubenswrapper[4842]: E0202 06:47:29.432950 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:29 crc kubenswrapper[4842]: E0202 06:47:29.433159 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.461273 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.461332 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.461350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.461435 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.461463 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.564569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.564798 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.564862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.564923 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.564976 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.667873 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.668034 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.668093 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.668152 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.668231 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.771024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.771116 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.771147 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.771169 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.771181 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.874136 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.874467 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.874656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.874807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.874929 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.977089 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.977151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.977169 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.977193 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:29 crc kubenswrapper[4842]: I0202 06:47:29.977212 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:29Z","lastTransitionTime":"2026-02-02T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.079910 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.080076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.080135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.080205 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.080311 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.182984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.183089 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.183148 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.183202 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.183276 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.287174 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.287282 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.287303 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.287331 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.287349 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.389927 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.389998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.390023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.390050 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.390073 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.418202 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 23:45:09.027383852 +0000 UTC Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.433250 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:30 crc kubenswrapper[4842]: E0202 06:47:30.433422 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.492896 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.493014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.493195 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.493358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.493514 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.595938 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.596064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.596120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.596173 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.596261 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.698949 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.699240 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.699433 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.699609 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.699754 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.802798 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.802861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.802881 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.802910 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.802930 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.906264 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.906563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.906711 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.906839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:30 crc kubenswrapper[4842]: I0202 06:47:30.906958 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:30Z","lastTransitionTime":"2026-02-02T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.010929 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.010991 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.011008 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.011032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.011050 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.114716 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.114776 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.114795 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.114827 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.114847 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.219015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.219453 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.219674 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.219862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.220046 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.323172 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.323578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.323808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.324038 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.324490 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.419101 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:51:44.507988361 +0000 UTC Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.427991 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.428029 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.428043 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.428067 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.428079 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.435095 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:31 crc kubenswrapper[4842]: E0202 06:47:31.435477 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.435341 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.435601 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:31 crc kubenswrapper[4842]: E0202 06:47:31.436283 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:31 crc kubenswrapper[4842]: E0202 06:47:31.436468 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.530920 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.531304 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.531453 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.531644 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.531772 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.635020 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.635094 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.635115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.635143 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.635165 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.738672 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.738722 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.738740 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.738763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.738779 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.841528 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.841625 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.841670 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.841704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.841729 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.945929 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.945997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.946020 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.946054 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:31 crc kubenswrapper[4842]: I0202 06:47:31.946080 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:31Z","lastTransitionTime":"2026-02-02T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.049375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.049462 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.049482 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.049987 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.050053 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.153715 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.154106 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.154295 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.154502 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.154608 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.258010 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.258053 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.258064 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.258081 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.258093 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.361652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.361704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.361716 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.361741 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.361757 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.420326 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 21:10:35.244749299 +0000 UTC Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.432700 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:32 crc kubenswrapper[4842]: E0202 06:47:32.432905 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.464918 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.465330 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.465491 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.465647 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.465807 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.568987 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.569034 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.569044 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.569060 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.569070 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.672207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.672295 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.672311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.672335 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.672353 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.774461 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.774522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.774548 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.774576 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.774599 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.878504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.878579 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.878599 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.878629 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.878650 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.981568 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.981641 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.981663 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.981694 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:32 crc kubenswrapper[4842]: I0202 06:47:32.981715 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:32Z","lastTransitionTime":"2026-02-02T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.084313 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.084377 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.084394 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.084421 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.084448 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.187416 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.187686 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.187847 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.187995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.188133 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.291776 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.292261 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.292524 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.292741 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.292895 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.396202 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.396308 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.396337 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.396367 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.396390 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.420501 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:55:01.981234801 +0000 UTC Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.433022 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:33 crc kubenswrapper[4842]: E0202 06:47:33.433833 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.433088 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.433052 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:33 crc kubenswrapper[4842]: E0202 06:47:33.434388 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:33 crc kubenswrapper[4842]: E0202 06:47:33.434446 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.450401 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.499418 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.499475 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.499492 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.499516 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.499536 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.603129 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.603243 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.603265 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.603292 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.603314 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.713780 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.713850 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.713869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.713925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.713944 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.817305 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.817691 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.817854 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.818048 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.818200 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.921352 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.921794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.922063 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.922318 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:33 crc kubenswrapper[4842]: I0202 06:47:33.922548 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:33Z","lastTransitionTime":"2026-02-02T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.025659 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.025722 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.025734 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.025750 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.025785 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.128760 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.128825 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.129032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.129048 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.129058 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.232273 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.232353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.232375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.232406 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.232428 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.336066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.336145 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.336166 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.336194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.336213 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.421612 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:06:00.891389633 +0000 UTC Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.433369 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:34 crc kubenswrapper[4842]: E0202 06:47:34.433578 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.439658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.439718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.439744 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.439775 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.439800 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.542798 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.542876 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.542892 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.542917 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.542934 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.646776 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.646817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.646825 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.646844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.646855 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.749191 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.749275 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.749295 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.749371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.749395 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.853687 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.853755 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.853773 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.853797 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.853815 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.956619 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.957014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.957173 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.957387 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:34 crc kubenswrapper[4842]: I0202 06:47:34.957545 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:34Z","lastTransitionTime":"2026-02-02T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.061304 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.061662 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.061841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.061997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.062159 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.165794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.165865 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.165885 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.165911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.165929 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.269173 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.269598 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.269742 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.269898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.270040 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.373181 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.373329 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.373350 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.373377 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.373395 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.422141 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:05:38.586386205 +0000 UTC Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.433049 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.433093 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:35 crc kubenswrapper[4842]: E0202 06:47:35.433415 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.433452 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:35 crc kubenswrapper[4842]: E0202 06:47:35.434171 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:35 crc kubenswrapper[4842]: E0202 06:47:35.434460 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.453756 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.474750 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.477654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.477712 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.477732 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.477760 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.477811 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.490932 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.509242 4842 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.529183 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.556619 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.580196 4842 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.582705 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.582750 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.582768 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.582793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.582813 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.601550 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.621842 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.639092 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.654729 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.678097 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.686514 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.686593 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc 
kubenswrapper[4842]: I0202 06:47:35.686606 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.686628 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.687255 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.696908 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:4
6:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.711918 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.726825 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.744712 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.763500 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.787408 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148
b44b0279eb6feff8b4090951\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:35Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.790581 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.790638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.790651 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.790669 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.790681 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.893909 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.893952 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.893961 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.893975 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.893985 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.997509 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.997947 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.998149 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.998360 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:35 crc kubenswrapper[4842]: I0202 06:47:35.998507 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:35Z","lastTransitionTime":"2026-02-02T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.102417 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.102929 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.103024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.103123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.103271 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.206554 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.206600 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.206610 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.206627 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.206639 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.310150 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.310633 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.310652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.310682 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.310699 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.413061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.413138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.413155 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.413181 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.413199 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.423261 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:38:02.626231697 +0000 UTC Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.432952 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:36 crc kubenswrapper[4842]: E0202 06:47:36.433655 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.434056 4842 scope.go:117] "RemoveContainer" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.517715 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.517799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.517826 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.517860 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.517896 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.620822 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.620858 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.621066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.621091 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.621101 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.724011 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.724069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.724085 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.724113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.724131 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.826762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.826812 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.826827 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.826848 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.826863 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.930297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.930349 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.930363 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.930383 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:36 crc kubenswrapper[4842]: I0202 06:47:36.930397 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:36Z","lastTransitionTime":"2026-02-02T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.021783 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/2.log" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.031014 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.031600 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.036299 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.036355 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.036371 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.036395 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.036515 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.051911 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.065301 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.077720 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.099133 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.123176 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.138794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.138835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.138848 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.138870 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.138886 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.139571 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.169911 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc
0f562bbcdab417ec89e89716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.189972 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.212443 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 
2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.228167 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.241801 4842 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.241851 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.241866 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.241887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.241898 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.243838 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.256926 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.271156 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.287404 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.300549 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.314512 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.325197 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.336167 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.343498 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.343539 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.343554 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.343572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.343588 4842 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.371769 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.371843 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.371852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.371869 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.371888 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.385202 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.390998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.391043 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.391055 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.391120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.391139 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.411784 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.420569 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.420621 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.420638 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.420663 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.420680 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.423639 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:44:52.710145001 +0000 UTC Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.432537 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.432731 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.433067 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.433160 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.433385 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.433477 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.446268 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.450480 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.450518 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.450535 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.450555 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.450571 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.472242 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.475699 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.475740 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.475752 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.475771 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.475786 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.494476 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:37Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:37 crc kubenswrapper[4842]: E0202 06:47:37.494600 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.496169 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.496263 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.496278 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.496293 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.496304 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.599392 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.599454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.599471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.599499 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.599515 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.702113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.702152 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.702161 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.702174 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.702184 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.805071 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.805124 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.805140 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.805161 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.805178 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.908728 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.908829 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.908853 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.908881 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:37 crc kubenswrapper[4842]: I0202 06:47:37.908899 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:37Z","lastTransitionTime":"2026-02-02T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.012096 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.012269 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.012290 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.012321 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.012343 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.037182 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/3.log" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.038424 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/2.log" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.043542 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" exitCode=1 Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.043600 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.043648 4842 scope.go:117] "RemoveContainer" containerID="d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.049258 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:47:38 crc kubenswrapper[4842]: E0202 06:47:38.049555 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.071823 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.093327 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.112114 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.115586 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.115652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.115680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.115711 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.115736 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.148282 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d585d3e8eec9311b405eb6943ad400b0dbfbd148b44b0279eb6feff8b4090951\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"message\\\":\\\"F0202 06:47:06.480989 6477 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:06Z is after 2025-08-24T17:21:41Z]\\\\nI0202 06:47:06.480978 6477 services_controller.go:451] Built service openshift-kube-controller-manager/kube-controller-manager cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-controller-manager/kube-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-controller-manager/kube-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:47:37.456333 6892 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 06:47:37.456337 6892 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 06:47:37.456374 6892 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:47:37.456388 6892 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 06:47:37.456407 6892 factory.go:656] Stopping watch factory\\\\nI0202 06:47:37.456419 6892 ovnkube.go:599] Stopped ovnkube\\\\nI0202 06:47:37.456444 6892 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:47:37.456451 6892 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:47:37.456458 6892 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:47:37.456463 6892 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:47:37.456473 6892 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 06:47:37.456479 6892 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 06:47:37.456485 6892 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 06:47:37.456490 6892 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 06:47:37.456499 6892 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 06:47:37.456549 6892 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.171883 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.190938 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.207552 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.218311 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.218372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.218390 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.218414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.218435 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.228878 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.252120 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.270428 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.288339 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.304523 4842 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 
06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.322541 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.322600 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.322618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.322642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.322660 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.324979 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.342121 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.358544 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.373089 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.386379 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.406834 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:38Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.423977 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:06:19.245982854 +0000 UTC Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.426323 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.426372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.426390 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.426415 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.426433 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.432539 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:38 crc kubenswrapper[4842]: E0202 06:47:38.432756 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.528654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.528731 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.528751 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.528778 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.528798 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.631923 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.632004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.632033 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.632065 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.632089 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.734616 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.734672 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.734689 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.734709 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.734724 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.838313 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.838406 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.838428 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.838455 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.838473 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.941697 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.941769 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.941788 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.941813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:38 crc kubenswrapper[4842]: I0202 06:47:38.941831 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:38Z","lastTransitionTime":"2026-02-02T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.044503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.044562 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.044578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.044604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.044622 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.050749 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/3.log" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.056354 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.056878 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.076664 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.094079 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.109720 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.128825 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.149297 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.149361 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.149379 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.149405 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.149423 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.154015 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.176020 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.195307 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.225436 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc
0f562bbcdab417ec89e89716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:47:37.456333 6892 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 06:47:37.456337 6892 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 06:47:37.456374 6892 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:47:37.456388 6892 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 06:47:37.456407 6892 factory.go:656] Stopping watch factory\\\\nI0202 06:47:37.456419 6892 ovnkube.go:599] Stopped ovnkube\\\\nI0202 06:47:37.456444 6892 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:47:37.456451 6892 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:47:37.456458 6892 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:47:37.456463 6892 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:47:37.456473 6892 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 06:47:37.456479 6892 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 06:47:37.456485 6892 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 06:47:37.456490 6892 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 06:47:37.456499 6892 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 06:47:37.456549 6892 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.249424 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.253115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.253165 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.253182 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.253207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.253272 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.272907 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.277264 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.277456 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.277557 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.277637 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.277830 4842 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.277922 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.277890599 +0000 UTC m=+148.655158561 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278047 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.278026192 +0000 UTC m=+148.655294144 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278178 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278209 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278278 4842 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278344 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.278323029 +0000 UTC m=+148.655590981 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278426 4842 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.278505 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.278486593 +0000 UTC m=+148.655754555 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.291330 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is 
after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.313115 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.332512 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.352661 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.357804 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.357882 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.357900 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.357928 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.357946 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.374251 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.378326 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.378539 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.378580 4842 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.378603 4842 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.378698 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.378667781 +0000 UTC m=+148.755935733 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.393115 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\
\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.417031 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.424292 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:28:11.040305754 +0000 UTC Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.433526 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.433616 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.433773 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.433939 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.434166 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:39 crc kubenswrapper[4842]: E0202 06:47:39.434341 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.435342 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:39Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.460830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.460887 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.460938 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.460963 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.460989 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.564148 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.564255 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.564275 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.564300 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.564318 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.667374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.667433 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.667450 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.667473 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.667492 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.770284 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.770343 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.770360 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.770808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.770896 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.874592 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.874647 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.874659 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.874676 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.874688 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.979114 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.979165 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.979180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.979196 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:39 crc kubenswrapper[4842]: I0202 06:47:39.979208 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:39Z","lastTransitionTime":"2026-02-02T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.082127 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.082553 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.082699 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.082825 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.082967 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.188545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.188605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.188622 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.188645 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.188664 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.292658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.292967 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.292984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.293009 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.293027 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:40Z","lastTransitionTime":"2026-02-02T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
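Every entry in this stretch hinges on one gate: the container runtime reports NetworkReady=false because no CNI network configuration exists in /etc/kubernetes/cni/net.d/ yet (on CRC/OpenShift the network operator writes that file once Multus/OVN-Kubernetes come up). A minimal sketch of the directory check implied by the error text -- readiness meaning "at least one CNI config file present" -- under that assumption; the function name and glob patterns are illustrative, not the kubelet's actual code:

    // networkready.go -- illustrative sketch of the "is there any CNI config
    // file yet?" check behind the NetworkPluginNotReady message above.
    package main

    import (
        "fmt"
        "path/filepath"
    )

    // networkReady reports whether confDir contains at least one CNI network
    // configuration file (the condition the runtime's NetworkReady is gated on).
    func networkReady(confDir string) bool {
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            if matches, _ := filepath.Glob(filepath.Join(confDir, pat)); len(matches) > 0 {
                return true
            }
        }
        return false
    }

    func main() {
        fmt.Println(networkReady("/etc/kubernetes/cni/net.d"))
    }

Until that returns true, the kubelet keeps re-asserting Ready=False on every status sync, which is the ~100 ms drumbeat seen above.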
[Node-status block repeats at 06:47:40.396.]
Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.424822 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:20:24.771503043 +0000 UTC
Feb 02 06:47:40 crc kubenswrapper[4842]: I0202 06:47:40.433388 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:40 crc kubenswrapper[4842]: E0202 06:47:40.433612 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
[Node-status block repeats at 06:47:40.498.]
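Note that the kubelet-serving rotation deadline changes on every certificate_manager entry (2025-11-29 here, then 2025-12-26, 2025-12-03, 2026-01-03, and 2026-01-13 below) while the expiration stays fixed at 2026-02-24: client-go's certificate manager re-draws a jittered deadline from the tail of the certificate's lifetime each time it evaluates rotation. A sketch of that computation, assuming the documented 70-90% jitter window; notBefore below is a made-up issue time for illustration:

    // rotationjitter.go -- sketch of the jittered rotation deadline behind the
    // certificate_manager.go entries above (assumed: a uniform draw between the
    // 70% and 90% points of the certificate's validity window, redrawn each time).
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        // Pick a point between 70% and 90% of the way through the lifetime.
        jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log
        notBefore := notAfter.AddDate(-1, 0, 0)                         // hypothetical issue time
        for i := 0; i < 3; i++ {
            fmt.Println(rotationDeadline(notBefore, notAfter)) // differs per call, as in the log
        }
    }

The jitter spreads rotation load across a fleet; the scatter of deadlines in this log is therefore expected behavior, not a symptom.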
[Node-status block repeats at 06:47:40.601, 06:47:40.705, 06:47:40.809, 06:47:40.912, 06:47:41.021, and 06:47:41.125.]
[Node-status block repeats at 06:47:41.230 and 06:47:41.333.]
Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.425799 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:43:04.201167061 +0000 UTC
Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.433285 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.433353 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:41 crc kubenswrapper[4842]: E0202 06:47:41.433491 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:41 crc kubenswrapper[4842]: I0202 06:47:41.433554 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:41 crc kubenswrapper[4842]: E0202 06:47:41.433739 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:41 crc kubenswrapper[4842]: E0202 06:47:41.433823 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[Node-status block repeats at 06:47:41.436.]
[Node-status block repeats at 06:47:41.539, 06:47:41.643, 06:47:41.747, 06:47:41.850, 06:47:41.953, 06:47:42.057, 06:47:42.161, 06:47:42.265, and 06:47:42.368.]
Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.426934 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:50:27.123580124 +0000 UTC
Feb 02 06:47:42 crc kubenswrapper[4842]: I0202 06:47:42.433359 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:42 crc kubenswrapper[4842]: E0202 06:47:42.433564 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
[Node-status block repeats at 06:47:42.473 and 06:47:42.576.]
[Node-status block repeats at 06:47:42.680, 06:47:42.783, 06:47:42.886, 06:47:42.990, 06:47:43.094, and 06:47:43.197.]
[Node-status block repeats at 06:47:43.300 and 06:47:43.404.]
Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.427744 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:05:17.273484153 +0000 UTC
Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.433210 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.433275 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:43 crc kubenswrapper[4842]: I0202 06:47:43.433621 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:43 crc kubenswrapper[4842]: E0202 06:47:43.433822 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:43 crc kubenswrapper[4842]: E0202 06:47:43.433999 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:43 crc kubenswrapper[4842]: E0202 06:47:43.434359 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[Node-status block repeats at 06:47:43.508.]
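By this point the pattern is fixed: each sandbox-less pod is retried roughly every one to two seconds and fails with the identical CNI error. When triaging a capture like this, it helps to collapse the spam into per-pod counts; a sketch under the assumption of the kubelet.log line format shown above (one entry per line; entries split across lines by extraction will be undercounted):

    // podsyncerrors.go -- sketch: tally "Error syncing pod" entries per pod from
    // a kubelet log on stdin, matching the pod="..." field format shown above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`"Error syncing pod, skipping".*?pod="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet log lines can be long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for pod, n := range counts {
            fmt.Printf("%6d  %s\n", n, pod)
        }
    }

Run as, for example, go run podsyncerrors.go < kubelet.log; in this capture every count traces back to the single missing-CNI-config cause.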
[Node-status block repeats at 06:47:43.610, 06:47:43.713, 06:47:43.816, 06:47:43.919, 06:47:44.022, and 06:47:44.125.]
Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.228278 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.228348 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.228375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.228402 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.228422 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.331659 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.331744 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.331763 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.331791 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.331809 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.428145 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:58:01.504770272 +0000 UTC Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.432660 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:44 crc kubenswrapper[4842]: E0202 06:47:44.432957 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.434762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.434839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.434855 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.434891 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.434907 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.538366 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.538870 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.539068 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.539294 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.539495 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.642607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.642855 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.642934 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.642998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.643053 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.745654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.746258 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.746380 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.746470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.746572 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.848610 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.848657 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.848668 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.848684 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.848695 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.958002 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.958047 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.958692 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.958714 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:44 crc kubenswrapper[4842]: I0202 06:47:44.958728 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:44Z","lastTransitionTime":"2026-02-02T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.061771 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.062130 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.062349 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.062594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.062752 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.166861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.166916 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.166934 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.166959 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.166977 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
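[editor's note] Each setters.go:603 entry above is the kubelet rewriting the node's Ready condition, and the same condition object is what shows up in the API. A minimal client-go sketch for reading it back follows; the node name "crc" comes from the log, while the kubeconfig path is an assumption for illustration:

```go
// readycheck.go: read back the Ready condition that setters.go:603 keeps
// updating in the log above. Sketch under assumptions: kubeconfig location
// is illustrative; node name "crc" is taken from the log lines.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; adjust for the environment at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// For the state logged above this prints:
			// Ready=False reason=KubeletNotReady message=container runtime network not ready: ...
			fmt.Printf("Ready=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}
```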
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.270486 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.270941 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.271154 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.271375 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.271587 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.375521 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.375931 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.376257 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.376445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.376632 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.429184 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:42:21.544717261 +0000 UTC
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.432516 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.432775 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.432561 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:45 crc kubenswrapper[4842]: E0202 06:47:45.432997 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:45 crc kubenswrapper[4842]: E0202 06:47:45.433068 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:45 crc kubenswrapper[4842]: E0202 06:47:45.433266 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.445749 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.465560 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.481671 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.481765 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.481791 4842 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.481824 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.481850 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.483195 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.501901 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.519162 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.534021 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.553049 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.570149 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.583915 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.583956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.583967 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.583986 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.584000 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.588403 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.603435 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.634530 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc
0f562bbcdab417ec89e89716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:47:37.456333 6892 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 06:47:37.456337 6892 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 06:47:37.456374 6892 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:47:37.456388 6892 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 06:47:37.456407 6892 factory.go:656] Stopping watch factory\\\\nI0202 06:47:37.456419 6892 ovnkube.go:599] Stopped ovnkube\\\\nI0202 06:47:37.456444 6892 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:47:37.456451 6892 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:47:37.456458 6892 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:47:37.456463 6892 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:47:37.456473 6892 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 06:47:37.456479 6892 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 06:47:37.456485 6892 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 06:47:37.456490 6892 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 06:47:37.456499 6892 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 06:47:37.456549 6892 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.653179 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.670511 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.686912 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.686974 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.686991 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.687015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.687032 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.687435 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.704913 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.726158 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.745619 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.765569 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:45Z is after 
2025-08-24T17:21:41Z" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.789859 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.789911 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.789925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.789945 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.789958 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.893619 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.893669 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.893688 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.893714 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:45 crc kubenswrapper[4842]: I0202 06:47:45.893732 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:45.996913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:45.996956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:45.996972 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:45.996995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:45.997013 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:45Z","lastTransitionTime":"2026-02-02T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.100664 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.100775 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.100794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.100819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.100836 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.204729 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.204797 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.204817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.204844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.204863 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.308628 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.308687 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.308703 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.308725 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.308742 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.412597 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.412658 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.412675 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.412703 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.412721 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.430009 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:26:00.759434323 +0000 UTC Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.433462 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:46 crc kubenswrapper[4842]: E0202 06:47:46.433632 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.515392 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.515445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.515455 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.515477 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.515490 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.619044 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.619097 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.619115 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.619138 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.619155 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.722493 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.722565 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.722589 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.722623 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.722649 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.825808 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.825871 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.825888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.825913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.825932 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.929327 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.929407 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.929434 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.929465 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:46 crc kubenswrapper[4842]: I0202 06:47:46.929484 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:46Z","lastTransitionTime":"2026-02-02T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.032753 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.032807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.032826 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.032852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.032909 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.136111 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.136171 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.136184 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.136204 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.136237 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.239493 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.239530 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.239542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.239559 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.239571 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.342761 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.342810 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.342819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.342836 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.342847 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.430656 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:01:00.808040108 +0000 UTC Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.433185 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.433264 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.433397 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.433506 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.433813 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.433691 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.445998 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.446080 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.446099 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.446125 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.446142 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.549196 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.549298 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.549316 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.549343 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.549362 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.652518 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.652577 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.652594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.652618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.652639 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.756577 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.756654 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.756677 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.756707 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.756729 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.816862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.816923 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.816946 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.816976 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.816998 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.838993 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:47Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.843775 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.843827 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.843844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.843867 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.843886 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.861262 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:47Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.870793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.871440 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.871642 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.871918 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.872110 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.891042 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:47Z is after 2025-08-24T17:21:41Z"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.896970 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.897023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.897045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.897074 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.897097 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.915066 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.920933 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.921006 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.921031 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.921061 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.921083 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.937204 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:47Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:47 crc kubenswrapper[4842]: E0202 06:47:47.938316 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.941053 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.941120 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.941144 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.941173 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:47 crc kubenswrapper[4842]: I0202 06:47:47.941195 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:47Z","lastTransitionTime":"2026-02-02T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.044206 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.044306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.044318 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.044340 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.044355 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.147194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.147293 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.147314 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.147339 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.147359 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.250605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.250670 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.250694 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.250721 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.250739 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.353107 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.353171 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.353195 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.353268 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.353302 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.431752 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:08:43.790204903 +0000 UTC Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.433105 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:48 crc kubenswrapper[4842]: E0202 06:47:48.433345 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.456561 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.456607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.456624 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.456646 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.456666 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.559780 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.559819 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.559829 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.559843 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.559853 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.662246 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.662308 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.662326 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.662351 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.662370 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.765400 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.765474 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.765492 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.765518 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.765538 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.868905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.868979 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.869002 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.869032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.869054 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.972540 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.972615 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.972653 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.972685 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:48 crc kubenswrapper[4842]: I0202 06:47:48.972708 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:48Z","lastTransitionTime":"2026-02-02T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.075939 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.075997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.076016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.076040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.076058 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.179044 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.179113 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.179123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.179157 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.179169 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.281757 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.281834 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.281853 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.281877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.281893 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.384607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.384685 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.384711 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.384738 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.384757 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.432718 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 21:11:45.283241535 +0000 UTC Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.432937 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:49 crc kubenswrapper[4842]: E0202 06:47:49.433044 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.433183 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:49 crc kubenswrapper[4842]: E0202 06:47:49.433336 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.433576 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:49 crc kubenswrapper[4842]: E0202 06:47:49.433838 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.488862 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.489207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.489515 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.490204 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.490472 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.593281 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.593327 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.593338 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.593358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.593371 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.696963 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.697030 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.697051 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.697078 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.697095 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.799825 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.800024 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.800200 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.800397 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.800529 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.904034 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.904080 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.904097 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.904122 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:49 crc kubenswrapper[4842]: I0202 06:47:49.904141 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:49Z","lastTransitionTime":"2026-02-02T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.006691 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.006738 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.006754 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.006778 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.006800 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.109988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.110066 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.110085 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.110685 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.110750 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.213735 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.213859 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.213877 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.213905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.213926 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.322161 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.322264 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.322288 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.322318 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.322339 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.425820 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.426091 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.426302 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.426476 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.426687 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.433296 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 22:28:54.667052284 +0000 UTC Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.433457 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:50 crc kubenswrapper[4842]: E0202 06:47:50.433636 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.529376 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.529426 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.529445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.529470 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.529489 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.631864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.632164 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.632372 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.632643 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:50 crc kubenswrapper[4842]: I0202 06:47:50.632817 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:50Z","lastTransitionTime":"2026-02-02T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.432682 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.432721 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:51 crc kubenswrapper[4842]: E0202 06:47:51.433008 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:51 crc kubenswrapper[4842]: E0202 06:47:51.433079 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.433317 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:51 crc kubenswrapper[4842]: E0202 06:47:51.433570 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:51 crc kubenswrapper[4842]: I0202 06:47:51.433612 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:34:27.928649099 +0000 UTC
Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.432912 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:52 crc kubenswrapper[4842]: E0202 06:47:52.433380 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:52 crc kubenswrapper[4842]: I0202 06:47:52.433907 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:27:40.014812412 +0000 UTC
Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.432504 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.432606 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:53 crc kubenswrapper[4842]: E0202 06:47:53.432669 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.432706 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:53 crc kubenswrapper[4842]: E0202 06:47:53.433009 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:53 crc kubenswrapper[4842]: E0202 06:47:53.433111 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.434162 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"
Feb 02 06:47:53 crc kubenswrapper[4842]: E0202 06:47:53.434437 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59"
Feb 02 06:47:53 crc kubenswrapper[4842]: I0202 06:47:53.434512 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:40:37.36169692 +0000 UTC
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.433479 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:54 crc kubenswrapper[4842]: E0202 06:47:54.433654 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:47:54 crc kubenswrapper[4842]: I0202 06:47:54.434971 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:06:13.98594947 +0000 UTC
Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.256407 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:47:55 crc kubenswrapper[4842]: E0202 06:47:55.256665 4842 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 02 06:47:55 crc kubenswrapper[4842]: E0202 06:47:55.256777 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs podName:4f6c3b51-669c-4c7b-a23a-ed68d139849e nodeName:}" failed. No retries permitted until 2026-02-02 06:48:59.256748463 +0000 UTC m=+164.634016385 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs") pod "network-metrics-daemon-9chjr" (UID: "4f6c3b51-669c-4c7b-a23a-ed68d139849e") : object "openshift-multus"/"metrics-daemon-secret" not registered
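The mount failure above is retried on a growing delay: the nestedpendingoperations entry reports durationBeforeRetry 1m4s (64s) and forbids retries until that window passes, which is consistent with a delay that doubles on each failure up to a cap. The Go sketch below illustrates such a capped doubling backoff; it is illustrative only, and the initial delay (500ms) and cap (2m) are assumptions, not values confirmed by this log.

    // Illustrative sketch, not the kubelet's actual implementation: a capped
    // doubling backoff consistent with "durationBeforeRetry 1m4s" above.
    package main

    import (
        "fmt"
        "time"
    )

    // nextBackoff doubles the previous delay, starting from an assumed
    // initial delay and clamping at an assumed upper bound.
    func nextBackoff(last time.Duration) time.Duration {
        const initial = 500 * time.Millisecond // assumed starting delay
        const maxDelay = 2 * time.Minute       // assumed upper bound
        if last == 0 {
            return initial
        }
        next := 2 * last
        if next > maxDelay {
            return maxDelay
        }
        return next
    }

    func main() {
        var d time.Duration
        for i := 0; i < 9; i++ {
            d = nextBackoff(d)
            fmt.Println(d) // 500ms 1s 2s 4s 8s 16s 32s 1m4s 2m0s
        }
    }

Under these assumptions, the eighth consecutive failure yields exactly the 1m4s delay recorded in the log before the cap takes over.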
Has your network provider started?"} Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.398901 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.398966 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.398984 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.399017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.399035 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.433471 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.433662 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.433905 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:55 crc kubenswrapper[4842]: E0202 06:47:55.434042 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:55 crc kubenswrapper[4842]: E0202 06:47:55.434191 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:55 crc kubenswrapper[4842]: E0202 06:47:55.435060 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
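The repeated NetworkReady=false condition above has a single cause: the CNI configuration directory /etc/kubernetes/cni/net.d/ named in the message contains no network configuration, so the container runtime reports the network plugin as not ready and the kubelet keeps the node's Ready condition False. A minimal Go sketch of that kind of directory probe, assuming the path from the log and the *.conf/*.conflist/*.json extensions that libcni loads; this illustrates what the message implies, not the kubelet's own implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cniConfDir is the directory named in the kubelet log; on this node it is
// empty, which is why NetworkReady stays false.
const cniConfDir = "/etc/kubernetes/cni/net.d"

// hasCNIConfig reports whether any CNI network configuration file is present.
// Matching *.conf, *.conflist, and *.json mirrors the extensions libcni accepts.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig(cniConfDir)
	if err != nil || !ok {
		// Same shape as the condition recorded by setters.go in the log.
		fmt.Printf("NetworkReady=false reason:NetworkPluginNotReady (no CNI configuration file in %s)\n", cniConfDir)
		return
	}
	fmt.Println("NetworkReady=true")
}

Until a file appears in that directory, the "No sandbox for pod can be found" and "Error syncing pod, skipping" entries above will keep recurring for every pod that needs a pod network.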
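The nestedpendingoperations entry above pushes the next MountVolume.SetUp attempt for the metrics-certs volume out by 1m4s. That value is consistent with a per-operation exponential backoff that doubles from a 500ms initial delay and saturates at a cap; the exact constants in this sketch are assumptions for illustration (1m4s is 500ms doubled seven times), not values read from this log:

package main

import (
	"fmt"
	"time"
)

// Assumed backoff parameters: a 500ms initial delay that doubles per failure
// up to a cap, matching the shape of the kubelet's volume retry behaviour.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

// delayBeforeRetry returns the wait imposed after `failures` consecutive
// failures of the same volume operation.
func delayBeforeRetry(failures int) time.Duration {
	d := initialDelay
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for f := 1; f <= 10; f++ {
		fmt.Printf("failure %2d -> durationBeforeRetry %v\n", f, delayBeforeRetry(f))
	}
	// failure 8 prints 1m4s, the value recorded for the metrics-certs volume,
	// suggesting this mount has already failed several times in a row.
}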
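The certificate_manager line just below reports a kubelet-serving certificate that expires on 2026-02-24 but whose rotation deadline, 2025-12-25, already lies in the past at the logged time, so rotation is overdue. A sketch of how such a deadline can be derived as a jittered fraction of the certificate's lifetime; the 70-90% window, the assumed one-year validity, and the helper rotationDeadline are all illustrative assumptions, chosen because the logged deadline falls inside that window for a one-year certificate:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a jittered point in the certificate's lifetime after
// which rotation is attempted; spreading deadlines avoids a thundering herd
// of simultaneous CSRs from many nodes. The [0.7, 0.9) window is an assumption.
func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	total := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*r.Float64() // fraction of lifetime in [0.7, 0.9)
	return notBefore.Add(time.Duration(jitter * float64(total)))
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year validity
	r := rand.New(rand.NewSource(1))
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter, r))
	// The logged deadline (2025-12-25) has already passed at the logged clock
	// time (2026-02-02), so the kubelet should be trying to rotate immediately.
}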
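Every status_manager "Failed to update status for pod" entry from here on carries the same root cause in its error string: the network-node-identity webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-02T06:47:55Z, so the API server's call to the webhook fails TLS verification and each pod-status patch is rejected. A self-contained sketch of the validity test behind the "certificate has expired or is not yet valid" message, using a stand-in certificate with the dates from the log (Go's TLS stack performs the equivalent check during chain verification):

package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// checkValidity mirrors the NotBefore/NotAfter test that makes certificate
// chain verification fail once a certificate's validity window has passed.
func checkValidity(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	// Stand-in certificate using the expiry reported in the log; the assumed
	// NotBefore simply places the start of the window before the expiry.
	expiry := time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC)
	cert := &x509.Certificate{
		NotBefore: expiry.AddDate(0, -6, 0),
		NotAfter:  expiry,
	}
	now := time.Date(2026, 2, 2, 6, 47, 55, 0, time.UTC) // node clock from the log
	if err := checkValidity(cert, now); err != nil {
		fmt.Println(err) // matches the error embedded in every failed patch below
	}
}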
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.435161 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 16:33:09.009243815 +0000 UTC Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.452508 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd724c8c-3a6c-47c0-9d98-a09e1f19a0d3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea7dbf1797f2a83822169cca574352b936c2fd78e0e5257f9ae0736e130a031\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fbde4efa36cc96dc3fe73b43d210dbf5959c4451faa716a026655924c9cd37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wlzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gkdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.470727 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9chjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f6c3b51-669c-4c7b-a23a-ed68d139849e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5htc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9chjr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.491577 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a52fecd8-6250-4bb6-bd2d-5f882a228ccd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0202 06:46:28.976113 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0202 06:46:28.978175 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1628440862/tls.crt::/tmp/serving-cert-1628440862/tls.key\\\\\\\"\\\\nI0202 06:46:35.182430 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0202 06:46:35.192382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0202 06:46:35.192426 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0202 06:46:35.192472 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0202 06:46:35.192483 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0202 06:46:35.211443 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0202 06:46:35.211493 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0202 06:46:35.211517 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0202 06:46:35.211524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0202 06:46:35.211532 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0202 06:46:35.211540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0202 06:46:35.211970 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0202 06:46:35.213997 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.501798 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.501888 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.501907 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.501961 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.501979 4842 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.512609 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61847fe8ae8ed6f549cc28c149d7c2fd263d5a68d1afec88d823f1903a5c077\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b2d8e4c3f2f608bb4b87da4df357853aacbc6b2b0c67ab8a81afac9632a9978\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.532479 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.565483 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f1e4f7c-d788-428b-bea6-e862234bfc59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:37Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0202 06:47:37.456333 6892 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0202 06:47:37.456337 6892 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0202 06:47:37.456374 6892 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0202 06:47:37.456388 6892 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0202 06:47:37.456407 6892 factory.go:656] Stopping watch factory\\\\nI0202 06:47:37.456419 6892 ovnkube.go:599] Stopped ovnkube\\\\nI0202 06:47:37.456444 6892 handler.go:208] Removed *v1.Node event handler 2\\\\nI0202 06:47:37.456451 6892 handler.go:208] Removed *v1.Node event handler 7\\\\nI0202 06:47:37.456458 6892 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0202 06:47:37.456463 6892 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0202 06:47:37.456473 6892 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0202 06:47:37.456479 6892 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0202 06:47:37.456485 6892 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0202 06:47:37.456490 6892 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0202 06:47:37.456499 6892 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0202 06:47:37.456549 6892 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:47:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qdmbp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-njnbq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.590299 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a55bc304-5cb2-4f7f-83b9-09d8188c73f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22b28fd738242f9d2e9c6a09d813c00242414570ab7bc607067234efdf694b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c829a191f970a16cdde8801a096cceecb82473ce844c47593a96b3d8f9813b09\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b7459bc3cdbef613c36f36c1b34a7ce386522137d231f5953620f6890b9aa75\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10df3f921fb93db9c67bc852f34cb23860ae5cfc1fa3a8d8778a0fbcfe79cbaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82d265c14912564221b9837788b2514f5df1ed13f55750f2e3ce74ffb617d2aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfd483da6caafcb2a3463ab7f6433b36b36be085fa19d87b863186fb52120017\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34ebec9b80a159be20612ae5f57b4b106e862c510db501f4abca5a6085b701e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-475lt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j7rrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.605845 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.605905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.605925 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.605952 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.605970 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.610949 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b888f8bf-78c9-4e73-bfa5-521f549b345e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bea776dbb154f5435006d46f8f410c0b0cb8c955f594cf39e4b707d4d99e619\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356ee9ccf90dd6a4aade1846889e97e195457f8a54c572eb8c8fd216fb5315f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87f2d3d4011b1076ea5c6892ec39059c3c43c73860bae0828cd0fa3b2c86cccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9cbe20ee565f166ee370b8e91aaea139e1d637016c3c84e4a67dba562fe735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.631688 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a63071a029db969427a2f92e2cbf54e3d4947e81212641175629e4ccdf5b724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.648369 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cdf7907-fc51-4fc8-8cd3-5a90a72cc0e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02e0a8355ba524fc2aaaf4ceb6c28d2560fcc506a7159f80193563692812f3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eedd0bd7e5b861fdac2d584e9a2854d8936e487a22fbee9364b4203fc22d1205\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-02T06:46:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.667397 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.686095 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dc3485b1d9b8d11113c697c7cf1fba2e5b185bb7d212c90b3e298e10aca1fe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.702851 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cc6e593-198e-4709-9026-103f892be5ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://748ac40bed3563a0effe55e00da160f6c2fec66c19d70984f781512bc790f457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kqr8f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p5hqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.713029 4842 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.713135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.713207 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.713306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.713381 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.729321 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gmkx9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c1fd21cd-ea6a-44a0-b136-f338fc97cf18\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-02T06:47:24Z\\\",\\\"message\\\":\\\"2026-02-02T06:46:38+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2\\\\n2026-02-02T06:46:38+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8bd0bcd-320d-4fb7-9489-b7dfac67e5c2 to /host/opt/cni/bin/\\\\n2026-02-02T06:46:39Z [verbose] multus-daemon started\\\\n2026-02-02T06:46:39Z [verbose] Readiness Indicator file check\\\\n2026-02-02T06:47:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:47:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k4nf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gmkx9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.750264 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d63607b5-4c6a-4784-987b-9e3cfcd777e1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e129340c823de1ca31188a10d3eab9745dfed191cfbfd84d32963312b652931b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d53d4cef00a992b4b22bc306c416fd71c28fbe55e7182f935a58047e5ce65dd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f99584dd74a21abb6d81710ff91d950d4f4dfe5e60c5b888e15c97fa0d6a5588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:15Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.775404 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:35Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.790689 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-q2xjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"110e0716-4e1c-49a1-acbb-016312fdb070\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://172de917fae38543467d803bf10b7799dd43f1d8c8a7bc8d9e3ed67a6cd3eec6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4jq8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-q2xjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.806484 4842 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-ms7n2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f026f084-0079-47a5-906c-14eb439eaa86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-02T06:46:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a3ef9354c178bcc7190ba120acad57695349a63dd658ba0ec83f35a3dcf1e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-02T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7tn4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-02T06:46:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-ms7n2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:55Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.816665 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.816739 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.816772 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.816802 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.816826 4842 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.920073 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.920130 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.920146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.920170 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:55 crc kubenswrapper[4842]: I0202 06:47:55.920186 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:55Z","lastTransitionTime":"2026-02-02T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.023324 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.023388 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.023407 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.023433 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.023451 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.126469 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.126573 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.126593 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.126617 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.126635 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.230333 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.230405 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.230425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.230454 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.230473 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.332941 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.332997 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.333014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.333041 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.333059 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.433119 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:56 crc kubenswrapper[4842]: E0202 06:47:56.433365 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435551 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:42:56.360446117 +0000 UTC Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435706 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435790 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435846 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.435873 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.539051 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.539123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.539146 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.539176 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.539194 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.641968 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.642040 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.642058 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.642086 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.642109 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.745233 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.745544 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.745623 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.745691 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.745771 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.849306 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.849556 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.849618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.849684 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.849748 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.952563 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.952669 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.952690 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.952716 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:56 crc kubenswrapper[4842]: I0202 06:47:56.952735 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:56Z","lastTransitionTime":"2026-02-02T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.055921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.055979 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.055996 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.056022 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.056062 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.159045 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.159135 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.159158 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.159184 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.159205 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.262688 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.262748 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.262766 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.262799 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.262822 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.365593 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.365679 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.365696 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.365720 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.365736 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.433553 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.434620 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.434787 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:47:57 crc kubenswrapper[4842]: E0202 06:47:57.434900 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:47:57 crc kubenswrapper[4842]: E0202 06:47:57.434774 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:47:57 crc kubenswrapper[4842]: E0202 06:47:57.435013 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.436053 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 16:42:07.177750561 +0000 UTC
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.457127 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.468752 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.468839 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.468860 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.468884 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.468905 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.572123 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.572194 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.572212 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.572269 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.572288 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.675710 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.675781 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.675795 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.675817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.675833 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.778807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.778897 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.778922 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.779020 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.779051 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.882970 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.883050 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.883069 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.883095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.883114 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.986430 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.986501 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.986524 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.986556 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:57 crc kubenswrapper[4842]: I0202 06:47:57.986583 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:57Z","lastTransitionTime":"2026-02-02T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.090517 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.090594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.090618 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.090652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.090716 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.193889 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.193995 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.194016 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.194046 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.194065 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.298029 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.298103 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.298121 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.298150 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.298170 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.329686 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.329762 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.329783 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.329813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.329836 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.351143 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.357017 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.357083 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.357112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.357147 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.357174 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.379944 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.387276 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.387355 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.387374 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.387401 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.387420 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.410451 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.417545 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.417631 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.417652 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.417721 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.417741 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.433460 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.433625 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.436647 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:18:01.385320899 +0000 UTC Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.443950 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.450078 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.450133 4842 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.450151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.450180 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.450200 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.472544 4842 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-02T06:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"46282451-0a80-4a55-be60-279b5a40f455\\\",\\\"systemUUID\\\":\\\"a2d9b7d5-4deb-436c-8c47-643b2c87256c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-02T06:47:58Z is after 2025-08-24T17:21:41Z" Feb 02 06:47:58 crc kubenswrapper[4842]: E0202 06:47:58.472787 4842 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.475629 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.475689 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.475711 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.475740 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.475761 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.578876 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.578969 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.578988 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.579023 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.579042 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.682364 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.682432 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.682446 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.682472 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.682490 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.786546 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.786595 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.786604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.786623 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.786635 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.890414 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.890490 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.890510 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.890538 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.890556 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.993893 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.993960 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.993977 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.994004 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:58 crc kubenswrapper[4842]: I0202 06:47:58.994024 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:58Z","lastTransitionTime":"2026-02-02T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.097083 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.097168 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.097186 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.097213 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.097255 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.200607 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.200680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.200704 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.200737 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.200761 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.304338 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.304425 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.304451 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.304480 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.304498 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.408476 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.408576 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.408594 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.409074 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.409122 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.432705 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:47:59 crc kubenswrapper[4842]: E0202 06:47:59.432860 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.433077 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:47:59 crc kubenswrapper[4842]: E0202 06:47:59.433123 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.433264 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:47:59 crc kubenswrapper[4842]: E0202 06:47:59.433315 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.437127 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 22:20:58.457188597 +0000 UTC Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.519039 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.519104 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.519117 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.519140 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.519158 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.622379 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.622441 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.622453 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.622471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.622483 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.726343 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.726406 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.726419 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.726442 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.726455 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.830711 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.830771 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.830784 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.830803 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.830815 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.934693 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.934757 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.934768 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.934793 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:47:59 crc kubenswrapper[4842]: I0202 06:47:59.934806 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:47:59Z","lastTransitionTime":"2026-02-02T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.037842 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.037905 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.037916 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.037935 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.037948 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.141504 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.141578 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.141598 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.141629 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.141649 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.245002 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.245062 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.245076 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.245099 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.245115 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.347878 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.347942 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.347959 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.347985 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.348002 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.433169 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:00 crc kubenswrapper[4842]: E0202 06:48:00.433408 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.438286 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:22:41.196169222 +0000 UTC Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.450522 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.450574 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.450593 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.450613 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.450626 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.553914 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.553973 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.553996 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.554027 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.554049 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.656901 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.656954 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.656964 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.656983 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.656997 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.760014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.760112 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.760128 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.760155 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.760171 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.863883 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.863942 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.863955 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.863974 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.863988 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.967702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.967774 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.967794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.967821 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:00 crc kubenswrapper[4842]: I0202 06:48:00.967840 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:00Z","lastTransitionTime":"2026-02-02T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.071712 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.071891 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.071921 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.071959 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.071977 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.174574 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.174637 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.174656 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.174680 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.174701 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.277696 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.277747 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.277758 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.277777 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.277790 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.381373 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.381437 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.381457 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.381486 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.381506 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.433050 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.433074 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:01 crc kubenswrapper[4842]: E0202 06:48:01.433379 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.433424 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:01 crc kubenswrapper[4842]: E0202 06:48:01.433558 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:01 crc kubenswrapper[4842]: E0202 06:48:01.433771 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.438389 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:23:17.050143336 +0000 UTC Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.483882 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.483955 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.483974 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.483996 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:01 crc kubenswrapper[4842]: I0202 06:48:01.484014 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:01Z","lastTransitionTime":"2026-02-02T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... node-status block repeats at ~100 ms intervals: 06:48:01.587, 06:48:01.690, 06:48:01.795, 06:48:01.898, 06:48:02.002, 06:48:02.107, 06:48:02.211, 06:48:02.314, 06:48:02.417 ...]
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.432741 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:48:02 crc kubenswrapper[4842]: E0202 06:48:02.433006 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:48:02 crc kubenswrapper[4842]: I0202 06:48:02.438930 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:09:26.213153981 +0000 UTC
[... node-status block repeats at 06:48:02.521 and 06:48:02.625 ...]
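Every failed sync above hits the same readiness gate: the runtime finds no config file under /etc/kubernetes/cni/net.d/. A sketch of that directory scan using the CNI project's libcni; the extension list mirrors common runtime defaults and is an assumption here:

```go
package main

import (
	"fmt"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Same directory the kubelet names in the error message.
	dir := "/etc/kubernetes/cni/net.d"
	// Extension list is an assumption mirroring common runtime defaults.
	files, err := libcni.ConfFiles(dir, []string{".conf", ".conflist", ".json"})
	if err != nil {
		panic(err)
	}
	if len(files) == 0 {
		// The state captured in this log: until the network provider writes
		// a config, NetworkReady stays false and sandboxes for
		// non-host-network pods cannot be created.
		fmt.Println("no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs:", files)
}
```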
[... node-status block repeats at 06:48:02.728, 06:48:02.831, 06:48:02.935, 06:48:03.040, 06:48:03.144, 06:48:03.246 ...]
[... node-status block repeats at 06:48:03.350 ...]
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.433651 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.433694 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.433709 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:48:03 crc kubenswrapper[4842]: E0202 06:48:03.433866 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:48:03 crc kubenswrapper[4842]: E0202 06:48:03.434006 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:48:03 crc kubenswrapper[4842]: E0202 06:48:03.434402 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:48:03 crc kubenswrapper[4842]: I0202 06:48:03.439037 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:36:40.648200001 +0000 UTC
[... node-status block repeats at 06:48:03.453 and 06:48:03.556 ...]
[... node-status block repeats at 06:48:03.660, 06:48:03.764, 06:48:03.867, 06:48:03.970, 06:48:04.074, 06:48:04.178 ...]
[... node-status block repeats at 06:48:04.282 and 06:48:04.387 ...]
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.432835 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr"
Feb 02 06:48:04 crc kubenswrapper[4842]: E0202 06:48:04.433360 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e"
Feb 02 06:48:04 crc kubenswrapper[4842]: I0202 06:48:04.440061 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:01:35.196423378 +0000 UTC
[... node-status block repeats at 06:48:04.491 and 06:48:04.595 ...]
[... node-status block repeats at 06:48:04.698, 06:48:04.801, 06:48:04.905, 06:48:05.008, 06:48:05.111, 06:48:05.214 ...]
[... node-status block repeats at 06:48:05.317 and 06:48:05.421 ...]
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.432910 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 02 06:48:05 crc kubenswrapper[4842]: E0202 06:48:05.433056 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.433321 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:48:05 crc kubenswrapper[4842]: E0202 06:48:05.433422 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.433484 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 02 06:48:05 crc kubenswrapper[4842]: E0202 06:48:05.433683 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.446453 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:17:56.485527186 +0000 UTC
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.520486 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.520457035 podStartE2EDuration="59.520457035s" podCreationTimestamp="2026-02-02 06:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.496277841 +0000 UTC m=+110.873545763" watchObservedRunningTime="2026-02-02 06:48:05.520457035 +0000 UTC m=+110.897724987"
[... node-status block repeats at 06:48:05.523 ...]
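The pod_startup_latency_tracker entries encode a simple relation: when no images were pulled (both pull timestamps are the zero time), podStartSLOduration equals watchObservedRunningTime minus podCreationTimestamp, and the SLO and E2E durations coincide. A minimal check against the kube-scheduler entry above:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go accepts fractional seconds when parsing even if the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2026-02-02 06:47:06 +0000 UTC")
	watched, _ := time.Parse(layout, "2026-02-02 06:48:05.520457035 +0000 UTC")
	// No image pulls (firstStartedPulling is the zero time), so nothing is
	// subtracted and SLO duration == E2E duration.
	fmt.Println(watched.Sub(created)) // 59.520457035s, matching podStartSLOduration
}
```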
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.556244 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podStartSLOduration=90.556191237 podStartE2EDuration="1m30.556191237s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.556072275 +0000 UTC m=+110.933340197" watchObservedRunningTime="2026-02-02 06:48:05.556191237 +0000 UTC m=+110.933459169"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.590767 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gmkx9" podStartSLOduration=90.590744591 podStartE2EDuration="1m30.590744591s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.575093533 +0000 UTC m=+110.952361456" watchObservedRunningTime="2026-02-02 06:48:05.590744591 +0000 UTC m=+110.968012503"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.606571 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.606550353 podStartE2EDuration="32.606550353s" podCreationTimestamp="2026-02-02 06:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.591078839 +0000 UTC m=+110.968346751" watchObservedRunningTime="2026-02-02 06:48:05.606550353 +0000 UTC m=+110.983818265"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.618667 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-q2xjl" podStartSLOduration=90.618638254 podStartE2EDuration="1m30.618638254s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.617519697 +0000 UTC m=+110.994787639" watchObservedRunningTime="2026-02-02 06:48:05.618638254 +0000 UTC m=+110.995906206"
[... node-status block repeats at 06:48:05.626 ...]
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.630987 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ms7n2" podStartSLOduration=89.630965952 podStartE2EDuration="1m29.630965952s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.63047498 +0000 UTC m=+111.007742912" watchObservedRunningTime="2026-02-02 06:48:05.630965952 +0000 UTC m=+111.008233864"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.646273 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=84.646250151 podStartE2EDuration="1m24.646250151s" podCreationTimestamp="2026-02-02 06:46:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.645859831 +0000 UTC m=+111.023127783" watchObservedRunningTime="2026-02-02 06:48:05.646250151 +0000 UTC m=+111.023518063"
[... node-status block repeats at 06:48:05.728 ...]
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.764571 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-j7rrg" podStartSLOduration=90.764549116 podStartE2EDuration="1m30.764549116s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.76390905 +0000 UTC m=+111.141176962" watchObservedRunningTime="2026-02-02 06:48:05.764549116 +0000 UTC m=+111.141817038"
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.784506 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gkdfm" podStartSLOduration=89.784485867 podStartE2EDuration="1m29.784485867s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.783828821 +0000 UTC m=+111.161096743" watchObservedRunningTime="2026-02-02 06:48:05.784485867 +0000 UTC m=+111.161753789"
[... node-status block repeats at 06:48:05.830 ...]
Feb 02 06:48:05 crc kubenswrapper[4842]: I0202 06:48:05.842593 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.842576559 podStartE2EDuration="8.842576559s" podCreationTimestamp="2026-02-02 06:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.842064776 +0000 UTC m=+111.219332698" watchObservedRunningTime="2026-02-02 06:48:05.842576559 +0000 UTC m=+111.219844461"
[... node-status block repeats at 06:48:05.933 ...]
Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.036665 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.036764 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.036783 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.036807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.036826 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.139973 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.140041 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.140059 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.140085 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.140106 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.243353 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.243423 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.243441 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.243471 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.243488 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.346078 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.346143 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.346162 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.346191 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.346209 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.432732 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:06 crc kubenswrapper[4842]: E0202 06:48:06.432958 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.446815 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 23:56:33.488422988 +0000 UTC Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.449267 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.449336 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.449358 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.449388 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.449407 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.552711 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.552768 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.552785 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.552809 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.552829 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.656139 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.656264 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.656289 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.656334 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.656361 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.759875 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.759948 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.759970 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.759999 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.760018 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.863744 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.863817 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.863835 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.863861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.863879 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.966743 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.966841 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.966863 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.966895 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:06 crc kubenswrapper[4842]: I0202 06:48:06.966916 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:06Z","lastTransitionTime":"2026-02-02T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.070572 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.070628 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.070645 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.070668 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.070687 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.173156 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.173258 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.173279 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.173303 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.173320 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.276732 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.276807 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.276830 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.276864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.276888 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.381387 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.381457 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.381480 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.381510 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.381531 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.433603 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.433749 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:07 crc kubenswrapper[4842]: E0202 06:48:07.433816 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.433909 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:07 crc kubenswrapper[4842]: E0202 06:48:07.434126 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:07 crc kubenswrapper[4842]: E0202 06:48:07.434286 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.447646 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:46:14.331735856 +0000 UTC Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.484792 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.484844 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.484861 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.484885 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.484903 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.588718 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.588813 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.588842 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.588875 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.588897 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.692702 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.692770 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.692794 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.692821 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.692837 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.796095 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.796165 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.796193 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.796278 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.796301 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.899434 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.899503 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.899542 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.899604 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:07 crc kubenswrapper[4842]: I0202 06:48:07.899629 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:07Z","lastTransitionTime":"2026-02-02T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.002780 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.002852 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.002871 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.002898 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.002920 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.106055 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.106092 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.106103 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.106119 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.106132 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.209302 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.209393 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.209418 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.209445 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.209469 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.312864 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.312938 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.312956 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.312982 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.313000 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.416942 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.417014 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.417032 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.417057 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.417075 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.432790 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:08 crc kubenswrapper[4842]: E0202 06:48:08.433157 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.434207 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:48:08 crc kubenswrapper[4842]: E0202 06:48:08.434496 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-njnbq_openshift-ovn-kubernetes(3f1e4f7c-d788-428b-bea6-e862234bfc59)\"" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.448163 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:38:19.399231974 +0000 UTC Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.520952 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.521003 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.521015 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.521036 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.521050 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.624459 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.624530 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.624548 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.624605 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.624625 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.727083 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.727151 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.727171 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.727199 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.727255 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.773779 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.773858 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.773883 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.773913 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.773932 4842 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-02T06:48:08Z","lastTransitionTime":"2026-02-02T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.848022 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.847993827 podStartE2EDuration="1m33.847993827s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:05.865576084 +0000 UTC m=+111.242843996" watchObservedRunningTime="2026-02-02 06:48:08.847993827 +0000 UTC m=+114.225261769" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.848736 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv"] Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.849303 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.852019 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.852109 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.852741 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 06:48:08 crc kubenswrapper[4842]: I0202 06:48:08.853055 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.033529 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.033624 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.033663 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.033733 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.033765 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.135549 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.135613 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.135719 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.135772 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.135804 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.136371 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.138101 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc 
kubenswrapper[4842]: I0202 06:48:09.139448 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.145567 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.170417 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vkztv\" (UID: \"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.432988 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.432999 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:09 crc kubenswrapper[4842]: E0202 06:48:09.433185 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:09 crc kubenswrapper[4842]: E0202 06:48:09.433675 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.433155 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:09 crc kubenswrapper[4842]: E0202 06:48:09.433979 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.448452 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:28:45.151910825 +0000 UTC Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.448594 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.460806 4842 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 02 06:48:09 crc kubenswrapper[4842]: I0202 06:48:09.464956 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" Feb 02 06:48:09 crc kubenswrapper[4842]: W0202 06:48:09.489328 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf0a1a0b_5f2b_47fe_ae63_8cc10e9ad69f.slice/crio-8252d758e791e8d1e59944e736717de930e65facdd6aaeca386f560a307180d9 WatchSource:0}: Error finding container 8252d758e791e8d1e59944e736717de930e65facdd6aaeca386f560a307180d9: Status 404 returned error can't find the container with id 8252d758e791e8d1e59944e736717de930e65facdd6aaeca386f560a307180d9 Feb 02 06:48:10 crc kubenswrapper[4842]: I0202 06:48:10.174118 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" event={"ID":"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f","Type":"ContainerStarted","Data":"aeed2cceffd144a699dd9d3912a8f1679c00e3ae944da369141c619a8adfe5f3"} Feb 02 06:48:10 crc kubenswrapper[4842]: I0202 06:48:10.174271 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" event={"ID":"df0a1a0b-5f2b-47fe-ae63-8cc10e9ad69f","Type":"ContainerStarted","Data":"8252d758e791e8d1e59944e736717de930e65facdd6aaeca386f560a307180d9"} Feb 02 06:48:10 crc kubenswrapper[4842]: I0202 06:48:10.199842 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vkztv" podStartSLOduration=94.19980909 podStartE2EDuration="1m34.19980909s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:10.197656638 +0000 UTC m=+115.574924620" watchObservedRunningTime="2026-02-02 06:48:10.19980909 +0000 UTC m=+115.577077032" Feb 02 06:48:10 crc kubenswrapper[4842]: I0202 06:48:10.433336 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:10 crc kubenswrapper[4842]: E0202 06:48:10.433580 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.180629 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/1.log" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.181614 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/0.log" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.181673 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1fd21cd-ea6a-44a0-b136-f338fc97cf18" containerID="eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d" exitCode=1 Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.181724 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerDied","Data":"eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d"} Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.181775 4842 scope.go:117] "RemoveContainer" containerID="8ab82214f87177d574853ea226061c99c11636ea31972aff1b9a4c3bad47752d" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.182418 4842 scope.go:117] "RemoveContainer" containerID="eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d" Feb 02 06:48:11 crc kubenswrapper[4842]: E0202 06:48:11.182685 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gmkx9_openshift-multus(c1fd21cd-ea6a-44a0-b136-f338fc97cf18)\"" pod="openshift-multus/multus-gmkx9" podUID="c1fd21cd-ea6a-44a0-b136-f338fc97cf18" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.433367 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.433527 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:11 crc kubenswrapper[4842]: E0202 06:48:11.433634 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:11 crc kubenswrapper[4842]: E0202 06:48:11.433909 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:11 crc kubenswrapper[4842]: I0202 06:48:11.434148 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:11 crc kubenswrapper[4842]: E0202 06:48:11.434505 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:12 crc kubenswrapper[4842]: I0202 06:48:12.188023 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/1.log" Feb 02 06:48:12 crc kubenswrapper[4842]: I0202 06:48:12.433347 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:12 crc kubenswrapper[4842]: E0202 06:48:12.433589 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:13 crc kubenswrapper[4842]: I0202 06:48:13.432550 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:13 crc kubenswrapper[4842]: I0202 06:48:13.432684 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:13 crc kubenswrapper[4842]: I0202 06:48:13.432747 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:13 crc kubenswrapper[4842]: E0202 06:48:13.435854 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:13 crc kubenswrapper[4842]: E0202 06:48:13.436089 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:13 crc kubenswrapper[4842]: E0202 06:48:13.436627 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:14 crc kubenswrapper[4842]: I0202 06:48:14.433337 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:14 crc kubenswrapper[4842]: E0202 06:48:14.433570 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:15 crc kubenswrapper[4842]: I0202 06:48:15.433083 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:15 crc kubenswrapper[4842]: I0202 06:48:15.433183 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:15 crc kubenswrapper[4842]: I0202 06:48:15.433264 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:15 crc kubenswrapper[4842]: E0202 06:48:15.435657 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:15 crc kubenswrapper[4842]: E0202 06:48:15.436381 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:15 crc kubenswrapper[4842]: E0202 06:48:15.437213 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:15 crc kubenswrapper[4842]: E0202 06:48:15.457965 4842 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 02 06:48:15 crc kubenswrapper[4842]: E0202 06:48:15.572662 4842 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 06:48:16 crc kubenswrapper[4842]: I0202 06:48:16.432822 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:16 crc kubenswrapper[4842]: E0202 06:48:16.433068 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:17 crc kubenswrapper[4842]: I0202 06:48:17.433287 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:17 crc kubenswrapper[4842]: I0202 06:48:17.433316 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:17 crc kubenswrapper[4842]: E0202 06:48:17.433529 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:17 crc kubenswrapper[4842]: I0202 06:48:17.433316 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:17 crc kubenswrapper[4842]: E0202 06:48:17.433683 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:17 crc kubenswrapper[4842]: E0202 06:48:17.433823 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:18 crc kubenswrapper[4842]: I0202 06:48:18.433103 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:18 crc kubenswrapper[4842]: E0202 06:48:18.433417 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:19 crc kubenswrapper[4842]: I0202 06:48:19.433104 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:19 crc kubenswrapper[4842]: I0202 06:48:19.433139 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:19 crc kubenswrapper[4842]: E0202 06:48:19.433299 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:19 crc kubenswrapper[4842]: I0202 06:48:19.433364 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:19 crc kubenswrapper[4842]: E0202 06:48:19.434107 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:19 crc kubenswrapper[4842]: E0202 06:48:19.434249 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:19 crc kubenswrapper[4842]: I0202 06:48:19.434913 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.221179 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/3.log" Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.223974 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerStarted","Data":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.224747 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.260579 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podStartSLOduration=105.260560575 podStartE2EDuration="1m45.260560575s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:20.259675903 +0000 UTC m=+125.636943835" watchObservedRunningTime="2026-02-02 06:48:20.260560575 +0000 UTC m=+125.637828487" Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.433039 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:20 crc kubenswrapper[4842]: E0202 06:48:20.433179 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:20 crc kubenswrapper[4842]: I0202 06:48:20.471477 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9chjr"] Feb 02 06:48:20 crc kubenswrapper[4842]: E0202 06:48:20.573839 4842 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 06:48:21 crc kubenswrapper[4842]: I0202 06:48:21.228987 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:21 crc kubenswrapper[4842]: E0202 06:48:21.229709 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:21 crc kubenswrapper[4842]: I0202 06:48:21.433328 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:21 crc kubenswrapper[4842]: I0202 06:48:21.433346 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:21 crc kubenswrapper[4842]: E0202 06:48:21.433536 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:21 crc kubenswrapper[4842]: I0202 06:48:21.433346 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:21 crc kubenswrapper[4842]: E0202 06:48:21.433754 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:21 crc kubenswrapper[4842]: E0202 06:48:21.433899 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:23 crc kubenswrapper[4842]: I0202 06:48:23.432927 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:23 crc kubenswrapper[4842]: E0202 06:48:23.433140 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:23 crc kubenswrapper[4842]: I0202 06:48:23.433194 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:23 crc kubenswrapper[4842]: I0202 06:48:23.433359 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:23 crc kubenswrapper[4842]: I0202 06:48:23.433272 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:23 crc kubenswrapper[4842]: E0202 06:48:23.433483 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:23 crc kubenswrapper[4842]: E0202 06:48:23.433660 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:23 crc kubenswrapper[4842]: E0202 06:48:23.433853 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:25 crc kubenswrapper[4842]: I0202 06:48:25.432922 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:25 crc kubenswrapper[4842]: I0202 06:48:25.433052 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:25 crc kubenswrapper[4842]: I0202 06:48:25.433090 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:25 crc kubenswrapper[4842]: E0202 06:48:25.434977 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:25 crc kubenswrapper[4842]: I0202 06:48:25.435125 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:25 crc kubenswrapper[4842]: E0202 06:48:25.435264 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:25 crc kubenswrapper[4842]: E0202 06:48:25.435129 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:25 crc kubenswrapper[4842]: E0202 06:48:25.435529 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:25 crc kubenswrapper[4842]: E0202 06:48:25.574825 4842 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 06:48:26 crc kubenswrapper[4842]: I0202 06:48:26.433167 4842 scope.go:117] "RemoveContainer" containerID="eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d" Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.254511 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/1.log" Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.254861 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerStarted","Data":"3b21f8e1a886dde4d1d02d4825a8f34dbf2fb604aa25d226e93ac27f195f2631"} Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.435704 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:27 crc kubenswrapper[4842]: E0202 06:48:27.435851 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.436060 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:27 crc kubenswrapper[4842]: E0202 06:48:27.436114 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.436322 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:27 crc kubenswrapper[4842]: E0202 06:48:27.436381 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:27 crc kubenswrapper[4842]: I0202 06:48:27.436514 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:27 crc kubenswrapper[4842]: E0202 06:48:27.436597 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:29 crc kubenswrapper[4842]: I0202 06:48:29.432692 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:29 crc kubenswrapper[4842]: I0202 06:48:29.432730 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:29 crc kubenswrapper[4842]: I0202 06:48:29.432687 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:29 crc kubenswrapper[4842]: I0202 06:48:29.432840 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:29 crc kubenswrapper[4842]: E0202 06:48:29.432901 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9chjr" podUID="4f6c3b51-669c-4c7b-a23a-ed68d139849e" Feb 02 06:48:29 crc kubenswrapper[4842]: E0202 06:48:29.433045 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 02 06:48:29 crc kubenswrapper[4842]: E0202 06:48:29.433191 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 02 06:48:29 crc kubenswrapper[4842]: E0202 06:48:29.433280 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.432971 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.433058 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.434118 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.434308 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.435663 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.436107 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.437021 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.438160 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.438586 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 06:48:31 crc kubenswrapper[4842]: I0202 06:48:31.438963 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.596738 4842 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.662487 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.663362 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.664683 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4rp8p"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.665695 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.666942 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.667710 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.668319 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.669075 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.670881 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5dc9g"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.672132 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677088 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677121 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677256 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677316 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677394 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677406 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677476 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.677403 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.678414 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.679357 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.678817 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.679954 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.680736 
4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qdspj"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.681399 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.681420 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.682594 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.683151 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.683686 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.685826 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.689779 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.690106 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.695280 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.701372 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.705080 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.705417 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.705783 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.706206 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.706366 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.706544 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.706592 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.706563 4842 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.709665 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.710265 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.712018 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.712313 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.713328 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.713575 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.713799 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.714250 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.714514 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.714879 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.715137 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.715377 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.716367 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.717156 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.722138 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.722598 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.722760 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.723153 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 
06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.723404 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.723556 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.729941 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.730500 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.732239 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.732684 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.732827 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.733292 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.733906 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.758321 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.761316 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.761866 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.762107 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.762373 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.762852 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lh2qm"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.763128 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.763263 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.763560 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.769004 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.769452 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.769825 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.770306 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.770520 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.770930 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.786681 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.786941 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.787121 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.787370 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.787761 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.788263 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.788322 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.788622 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.788655 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.789246 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.790120 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.795259 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.796179 
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.796179 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.796700 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.796967 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.797019 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.797235 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.797308 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.797398 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5"]
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.798468 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.798579 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.798720 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.798982 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.799941 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800036 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800114 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800174 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800334 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800537 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800646 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.800886 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808653 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-serving-cert\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808702 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-client\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808725 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808750 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-etcd-client\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808769 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-encryption-config\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808788 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808840 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76r2\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-kube-api-access-k76r2\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808869 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-config\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808890 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808926 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s6v4\" (UniqueName: \"kubernetes.io/projected/ceaf90b2-229c-4452-8a1b-fd016682bf6e-kube-api-access-7s6v4\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808950 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.808989 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809013 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809038 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpl2m\" (UniqueName: \"kubernetes.io/projected/e4367135-ecb4-447d-a89e-5dcbeffe345e-kube-api-access-mpl2m\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809056 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809077 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-trusted-ca\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809097 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809120 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809144 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809183 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809210 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809250 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shp4g\" (UniqueName: \"kubernetes.io/projected/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-kube-api-access-shp4g\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809269 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-node-pullsecrets\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g"
Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809290 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9f6z\" (UniqueName: \"kubernetes.io/projected/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-kube-api-access-c9f6z\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809330 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmggs\" (UniqueName: \"kubernetes.io/projected/45dcaecb-f74e-4eaf-886a-28b6632f8d44-kube-api-access-xmggs\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809349 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-encryption-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809371 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j2bb\" (UniqueName: \"kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809387 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809409 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809429 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809449 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809467 4842 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-images\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809490 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit-dir\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809511 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809530 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809554 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97ps\" (UniqueName: \"kubernetes.io/projected/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-kube-api-access-z97ps\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809572 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgtct\" (UniqueName: \"kubernetes.io/projected/10f8b640-1372-484f-b42f-97e336fb2992-kube-api-access-sgtct\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809597 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-config\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809621 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-image-import-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809642 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809659 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74549f13-263e-4e4f-8331-9f7fd6bf36b3-trusted-ca\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809682 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809734 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45dcaecb-f74e-4eaf-886a-28b6632f8d44-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809753 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809773 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmndw\" (UniqueName: \"kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809795 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809818 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-auth-proxy-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: 
I0202 06:48:39.809834 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-serving-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10f8b640-1372-484f-b42f-97e336fb2992-audit-dir\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809877 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpp28\" (UniqueName: \"kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809898 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809929 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-serving-cert\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809949 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809974 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4367135-ecb4-447d-a89e-5dcbeffe345e-machine-approver-tls\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.809993 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74549f13-263e-4e4f-8331-9f7fd6bf36b3-metrics-tls\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810014 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sznxk\" (UniqueName: \"kubernetes.io/projected/e08cb720-1a1d-47c3-a787-c61d377bf2dd-kube-api-access-sznxk\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810031 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810078 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-serving-cert\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810127 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810149 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.810263 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.816129 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-pbtq6"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.817856 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.824486 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827461 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-config\") pod 
\"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827535 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceaf90b2-229c-4452-8a1b-fd016682bf6e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827605 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceaf90b2-229c-4452-8a1b-fd016682bf6e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-audit-policies\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827695 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827773 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e08cb720-1a1d-47c3-a787-c61d377bf2dd-serving-cert\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827810 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.827935 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.828936 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829481 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5wqx2"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829678 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.830088 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.828936 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.830201 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829375 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829443 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829503 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.829555 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.831389 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.832880 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.833620 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834292 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834494 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834675 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834764 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834863 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.834943 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.835073 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.835228 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.835314 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 
06:48:39.835431 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.836001 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.836120 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.836483 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.836917 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.837500 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-j7bfz"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.837963 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.841270 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.842147 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.842519 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.842525 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.843242 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4rp8p"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.844337 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.846413 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.847378 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.849758 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.850113 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.851127 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.852024 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hv9fc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.852120 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.852465 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.854362 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.854410 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.855141 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.855328 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.856523 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.857575 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.858260 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.858332 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.859149 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.859559 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h6pjl"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.860246 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.861323 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n42rc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.862016 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.862328 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.863009 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.863869 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.864602 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.866802 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.867016 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lh2qm"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.868708 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.891907 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.893513 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.894761 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-m2mqz"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.895994 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.897055 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qdspj"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.897417 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899294 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5dc9g"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899321 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899332 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899346 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899355 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5wqx2"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899365 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.899478 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.900023 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.900384 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.900666 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.901093 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.905415 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.909179 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.910445 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.924854 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.925316 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.927027 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929272 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929316 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929734 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z97ps\" (UniqueName: \"kubernetes.io/projected/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-kube-api-access-z97ps\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929765 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/27bce4a1-799c-4d40-900c-455eaba28398-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929786 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ee0e33-a160-4303-af00-0b145647f807-config\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929802 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ee0e33-a160-4303-af00-0b145647f807-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929819 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgtct\" (UniqueName: \"kubernetes.io/projected/10f8b640-1372-484f-b42f-97e336fb2992-kube-api-access-sgtct\") 
pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929839 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-config\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929869 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929885 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74549f13-263e-4e4f-8331-9f7fd6bf36b3-trusted-ca\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929925 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929943 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-image-import-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929962 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.929987 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930013 
4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930036 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930070 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45dcaecb-f74e-4eaf-886a-28b6632f8d44-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930090 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmndw\" (UniqueName: \"kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930113 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-metrics-tls\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930137 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqnfv\" (UniqueName: \"kubernetes.io/projected/cc176201-02a2-46c0-903c-13943d989195-kube-api-access-wqnfv\") pod \"downloads-7954f5f757-pbtq6\" (UID: \"cc176201-02a2-46c0-903c-13943d989195\") " pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930167 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930185 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-serving-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930201 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930255 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-auth-proxy-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930273 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-srv-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930289 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10f8b640-1372-484f-b42f-97e336fb2992-audit-dir\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930308 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpp28\" (UniqueName: \"kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930325 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8ll8\" (UniqueName: \"kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930341 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930361 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-serving-cert\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930379 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: 
\"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930397 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d69d0f34-1e03-438d-9d97-de945aff185f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930415 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4367135-ecb4-447d-a89e-5dcbeffe345e-machine-approver-tls\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930430 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74549f13-263e-4e4f-8331-9f7fd6bf36b3-metrics-tls\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930448 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sznxk\" (UniqueName: \"kubernetes.io/projected/e08cb720-1a1d-47c3-a787-c61d377bf2dd-kube-api-access-sznxk\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930467 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-config\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930482 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930501 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930528 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-client\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc 
kubenswrapper[4842]: I0202 06:48:39.930551 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zhgm\" (UniqueName: \"kubernetes.io/projected/42ff05d2-dda3-411f-bcee-816f87ce21b8-kube-api-access-6zhgm\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930574 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-serving-cert\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930590 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930606 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930621 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930636 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930652 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-serving-cert\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930667 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-config\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930683 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ceaf90b2-229c-4452-8a1b-fd016682bf6e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930709 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceaf90b2-229c-4452-8a1b-fd016682bf6e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930728 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-audit-policies\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930748 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930765 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e08cb720-1a1d-47c3-a787-c61d377bf2dd-serving-cert\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930780 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930798 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d0f34-1e03-438d-9d97-de945aff185f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930816 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42ff05d2-dda3-411f-bcee-816f87ce21b8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930427 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930833 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-serving-cert\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930859 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9vml\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-kube-api-access-g9vml\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930897 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-client\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930915 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-etcd-client\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930948 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-encryption-config\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930967 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d69d0f34-1e03-438d-9d97-de945aff185f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.930988 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.931010 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume\") pod 
\"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.931029 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.931053 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76r2\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-kube-api-access-k76r2\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.931898 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-serving-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.932055 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.932547 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-auth-proxy-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.932700 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.933332 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.933572 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.933621 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.933738 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-config\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.934074 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.934202 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-config\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.934780 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.937168 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-etcd-client\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.937855 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.940363 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-pbtq6"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.940414 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.940630 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-encryption-config\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.941657 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-serving-cert\") pod \"apiserver-76f77b778f-5dc9g\" (UID: 
\"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.941709 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942006 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-serving-cert\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942045 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942062 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942128 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-serving-cert\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942247 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-image-import-ca\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942555 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942612 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-config\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942638 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942663 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f2ee0e33-a160-4303-af00-0b145647f807-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942686 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpmb2\" (UniqueName: \"kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942706 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-service-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942823 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10f8b640-1372-484f-b42f-97e336fb2992-audit-dir\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942873 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceaf90b2-229c-4452-8a1b-fd016682bf6e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.942886 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e4367135-ecb4-447d-a89e-5dcbeffe345e-machine-approver-tls\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943164 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943171 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s6v4\" (UniqueName: \"kubernetes.io/projected/ceaf90b2-229c-4452-8a1b-fd016682bf6e-kube-api-access-7s6v4\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943368 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943726 4842 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74549f13-263e-4e4f-8331-9f7fd6bf36b3-trusted-ca\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943875 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.943728 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4367135-ecb4-447d-a89e-5dcbeffe345e-config\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944034 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n42rc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944029 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944126 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944155 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944183 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpl2m\" (UniqueName: \"kubernetes.io/projected/e4367135-ecb4-447d-a89e-5dcbeffe345e-kube-api-access-mpl2m\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944210 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w98wq\" (UniqueName: \"kubernetes.io/projected/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-kube-api-access-w98wq\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944256 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-trusted-ca\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944276 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944319 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw98m\" (UniqueName: \"kubernetes.io/projected/091908d5-acab-418a-a5f2-fa909294222a-kube-api-access-bw98m\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944341 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944365 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944386 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944410 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944456 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kqzj\" (UniqueName: \"kubernetes.io/projected/57b85eac-df63-4c81-abe6-3dba293df9c2-kube-api-access-2kqzj\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944481 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944516 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944537 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57b85eac-df63-4c81-abe6-3dba293df9c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944559 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9f6z\" (UniqueName: \"kubernetes.io/projected/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-kube-api-access-c9f6z\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944578 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944629 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shp4g\" (UniqueName: \"kubernetes.io/projected/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-kube-api-access-shp4g\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944650 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-node-pullsecrets\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944674 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944699 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvqc\" (UniqueName: \"kubernetes.io/projected/fd96d668-a9b2-474f-8617-17eca5f01191-kube-api-access-xvvqc\") pod 
\"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944725 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-config\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944723 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944779 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-profile-collector-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944802 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b85eac-df63-4c81-abe6-3dba293df9c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944830 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmggs\" (UniqueName: \"kubernetes.io/projected/45dcaecb-f74e-4eaf-886a-28b6632f8d44-kube-api-access-xmggs\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944864 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-encryption-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944887 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j2bb\" (UniqueName: \"kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944908 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944926 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944946 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944973 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dnd7\" (UniqueName: \"kubernetes.io/projected/27bce4a1-799c-4d40-900c-455eaba28398-kube-api-access-2dnd7\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944997 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945015 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-images\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945032 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit-dir\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945051 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945070 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945229 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e08cb720-1a1d-47c3-a787-c61d377bf2dd-trusted-ca\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945249 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945491 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.944382 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945652 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945493 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.945974 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.946023 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-audit-dir\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.946037 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-service-ca-bundle\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.947408 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-node-pullsecrets\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.947651 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.947762 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hv9fc"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.949409 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.950271 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.950737 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/10f8b640-1372-484f-b42f-97e336fb2992-audit-policies\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.950864 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45dcaecb-f74e-4eaf-886a-28b6632f8d44-images\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.950891 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.950996 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.951021 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.951105 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 
06:48:39.951602 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.951896 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.953176 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.953237 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h6pjl"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.954227 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.955321 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6fhk9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.956636 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/10f8b640-1372-484f-b42f-97e336fb2992-etcd-client\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.956688 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.956795 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.958202 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kb6j9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.958707 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.958776 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.960764 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.961986 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kb6j9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.962073 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.963432 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6fhk9"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.964867 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-z2sjd"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.965303 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.965933 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z2sjd"] Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.965953 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74549f13-263e-4e4f-8331-9f7fd6bf36b3-metrics-tls\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.966050 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.966377 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967049 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967343 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/45dcaecb-f74e-4eaf-886a-28b6632f8d44-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967374 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967445 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967564 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967881 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e08cb720-1a1d-47c3-a787-c61d377bf2dd-serving-cert\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.967910 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceaf90b2-229c-4452-8a1b-fd016682bf6e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:39 crc 
kubenswrapper[4842]: I0202 06:48:39.968147 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.969972 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-encryption-config\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:39 crc kubenswrapper[4842]: I0202 06:48:39.985765 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.011296 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.025650 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045579 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045665 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d69d0f34-1e03-438d-9d97-de945aff185f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045702 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-config\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045722 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045748 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-client\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045768 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zhgm\" (UniqueName: \"kubernetes.io/projected/42ff05d2-dda3-411f-bcee-816f87ce21b8-kube-api-access-6zhgm\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045793 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045812 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-serving-cert\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045832 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d0f34-1e03-438d-9d97-de945aff185f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045849 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42ff05d2-dda3-411f-bcee-816f87ce21b8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045867 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9vml\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-kube-api-access-g9vml\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045886 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d69d0f34-1e03-438d-9d97-de945aff185f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045908 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045926 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045955 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2ee0e33-a160-4303-af00-0b145647f807-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: 
\"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045983 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpmb2\" (UniqueName: \"kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.045998 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-service-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046031 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w98wq\" (UniqueName: \"kubernetes.io/projected/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-kube-api-access-w98wq\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046048 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw98m\" (UniqueName: \"kubernetes.io/projected/091908d5-acab-418a-a5f2-fa909294222a-kube-api-access-bw98m\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046065 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046080 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046110 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57b85eac-df63-4c81-abe6-3dba293df9c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046131 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kqzj\" (UniqueName: \"kubernetes.io/projected/57b85eac-df63-4c81-abe6-3dba293df9c2-kube-api-access-2kqzj\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:40 crc 
kubenswrapper[4842]: I0202 06:48:40.046148 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046194 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvvqc\" (UniqueName: \"kubernetes.io/projected/fd96d668-a9b2-474f-8617-17eca5f01191-kube-api-access-xvvqc\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046211 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046248 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-profile-collector-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046265 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b85eac-df63-4c81-abe6-3dba293df9c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dnd7\" (UniqueName: \"kubernetes.io/projected/27bce4a1-799c-4d40-900c-455eaba28398-kube-api-access-2dnd7\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046310 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ee0e33-a160-4303-af00-0b145647f807-config\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046327 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ee0e33-a160-4303-af00-0b145647f807-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046351 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/27bce4a1-799c-4d40-900c-455eaba28398-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046372 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046388 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046406 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046428 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046458 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-metrics-tls\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046476 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqnfv\" (UniqueName: \"kubernetes.io/projected/cc176201-02a2-46c0-903c-13943d989195-kube-api-access-wqnfv\") pod \"downloads-7954f5f757-pbtq6\" (UID: \"cc176201-02a2-46c0-903c-13943d989195\") " pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046495 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046512 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-srv-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046527 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8ll8\" (UniqueName: \"kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046698 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-config\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.046952 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-service-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.047393 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42ff05d2-dda3-411f-bcee-816f87ce21b8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.047788 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.048081 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57b85eac-df63-4c81-abe6-3dba293df9c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.048754 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.048754 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-ca\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.048776 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.049427 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.049607 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-serving-cert\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.049807 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.049909 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-metrics-tls\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.050749 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57b85eac-df63-4c81-abe6-3dba293df9c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.051856 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fd96d668-a9b2-474f-8617-17eca5f01191-etcd-client\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.065858 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.071122 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.086075 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.125063 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 
06:48:40.151633 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.165599 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.186038 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.206382 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.225762 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.246464 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.278560 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.286017 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.288726 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.305977 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.326629 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.345995 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.366576 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.386408 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.406037 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.426145 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.434430 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.446520 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.471532 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.486128 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.506397 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.520834 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2ee0e33-a160-4303-af00-0b145647f807-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.525688 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.529591 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2ee0e33-a160-4303-af00-0b145647f807-config\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.545835 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.566051 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.570912 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d69d0f34-1e03-438d-9d97-de945aff185f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.585669 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.606298 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.617420 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d69d0f34-1e03-438d-9d97-de945aff185f-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.626030 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.646175 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.667710 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.673363 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-srv-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.687657 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.703043 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.703202 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/091908d5-acab-418a-a5f2-fa909294222a-profile-collector-cert\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.706486 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.726258 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.747025 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.766443 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.786926 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.806898 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.826502 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 06:48:40 crc 
kubenswrapper[4842]: I0202 06:48:40.846307 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.863986 4842 request.go:700] Waited for 1.005459468s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&limit=500&resourceVersion=0 Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.866082 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.887006 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.918326 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.927059 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.946256 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.966689 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 06:48:40 crc kubenswrapper[4842]: I0202 06:48:40.985738 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.006568 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.012476 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/27bce4a1-799c-4d40-900c-455eaba28398-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.027466 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 06:48:41 crc kubenswrapper[4842]: E0202 06:48:41.046897 4842 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Feb 02 06:48:41 crc kubenswrapper[4842]: E0202 06:48:41.047054 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume podName:5b43b464-5623-46bb-8097-65b505d08960 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:41.547014861 +0000 UTC m=+146.924282813 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume") pod "collect-profiles-29500245-vpjnw" (UID: "5b43b464-5623-46bb-8097-65b505d08960") : failed to sync configmap cache: timed out waiting for the condition Feb 02 06:48:41 crc kubenswrapper[4842]: E0202 06:48:41.046905 4842 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 02 06:48:41 crc kubenswrapper[4842]: E0202 06:48:41.047268 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls podName:42ff05d2-dda3-411f-bcee-816f87ce21b8 nodeName:}" failed. No retries permitted until 2026-02-02 06:48:41.547194265 +0000 UTC m=+146.924462217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls") pod "machine-config-controller-84d6567774-nz65j" (UID: "42ff05d2-dda3-411f-bcee-816f87ce21b8") : failed to sync secret cache: timed out waiting for the condition Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.047705 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.104638 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.104740 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.106120 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.126346 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.145560 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.165780 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.186689 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.225674 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.246134 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.265760 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.285875 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 
06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.307830 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.326993 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.346589 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.366744 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.385984 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.405979 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.426779 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.457587 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.465849 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.486738 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.506155 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.526625 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.579517 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sznxk\" (UniqueName: \"kubernetes.io/projected/e08cb720-1a1d-47c3-a787-c61d377bf2dd-kube-api-access-sznxk\") pod \"console-operator-58897d9998-4rp8p\" (UID: \"e08cb720-1a1d-47c3-a787-c61d377bf2dd\") " pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.595900 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z97ps\" (UniqueName: \"kubernetes.io/projected/7c4df1b8-c014-42db-ab26-6ac05f72c8ba-kube-api-access-z97ps\") pod \"openshift-controller-manager-operator-756b6f6bc6-cd8zk\" (UID: \"7c4df1b8-c014-42db-ab26-6ac05f72c8ba\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.606198 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgtct\" (UniqueName: 
\"kubernetes.io/projected/10f8b640-1372-484f-b42f-97e336fb2992-kube-api-access-sgtct\") pod \"apiserver-7bbb656c7d-jplm6\" (UID: \"10f8b640-1372-484f-b42f-97e336fb2992\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.611011 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.611092 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.611759 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.616809 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42ff05d2-dda3-411f-bcee-816f87ce21b8-proxy-tls\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.629061 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76r2\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-kube-api-access-k76r2\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.638774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpp28\" (UniqueName: \"kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28\") pod \"controller-manager-879f6c89f-rssw5\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.642463 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.675049 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmndw\" (UniqueName: \"kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw\") pod \"oauth-openshift-558db77b4-hj5sv\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") " pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.675346 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.678743 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s6v4\" (UniqueName: \"kubernetes.io/projected/ceaf90b2-229c-4452-8a1b-fd016682bf6e-kube-api-access-7s6v4\") pod \"openshift-apiserver-operator-796bbdcf4f-kmxhp\" (UID: \"ceaf90b2-229c-4452-8a1b-fd016682bf6e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.700571 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74549f13-263e-4e4f-8331-9f7fd6bf36b3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-99kbj\" (UID: \"74549f13-263e-4e4f-8331-9f7fd6bf36b3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.725414 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.731845 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9f6z\" (UniqueName: \"kubernetes.io/projected/d8b4ca95-d26b-4f03-b095-b5096b6c3fbe-kube-api-access-c9f6z\") pod \"apiserver-76f77b778f-5dc9g\" (UID: \"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe\") " pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.756404 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shp4g\" (UniqueName: \"kubernetes.io/projected/5aa0cd7d-de34-4c00-8eb2-40e35e430b5d-kube-api-access-shp4g\") pod \"authentication-operator-69f744f599-wjrtc\" (UID: \"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.771054 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpl2m\" (UniqueName: \"kubernetes.io/projected/e4367135-ecb4-447d-a89e-5dcbeffe345e-kube-api-access-mpl2m\") pod \"machine-approver-56656f9798-9xwbf\" (UID: \"e4367135-ecb4-447d-a89e-5dcbeffe345e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.789769 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j2bb\" (UniqueName: \"kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb\") pod \"route-controller-manager-6576b87f9c-brh4m\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.790159 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.806944 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.809208 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.809610 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmggs\" (UniqueName: \"kubernetes.io/projected/45dcaecb-f74e-4eaf-886a-28b6632f8d44-kube-api-access-xmggs\") pod \"machine-api-operator-5694c8668f-qdspj\" (UID: \"45dcaecb-f74e-4eaf-886a-28b6632f8d44\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.826552 4842 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.846381 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.864727 4842 request.go:700] Waited for 1.905735192s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.866268 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.866387 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.881589 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.886068 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.892389 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.901687 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.905674 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.926293 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.948362 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.951823 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.963151 4842 util.go:30] "No sandbox for pod can be found. 
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.963151 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.969711 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.971576 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.986867 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 02 06:48:41 crc kubenswrapper[4842]: I0202 06:48:41.990376 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj"]
Feb 02 06:48:41 crc kubenswrapper[4842]: W0202 06:48:41.994687 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4367135_ecb4_447d_a89e_5dcbeffe345e.slice/crio-8f87e94e972949701dd8325de9ff009c37cf799b868dbf647e6ca97c08949096 WatchSource:0}: Error finding container 8f87e94e972949701dd8325de9ff009c37cf799b868dbf647e6ca97c08949096: Status 404 returned error can't find the container with id 8f87e94e972949701dd8325de9ff009c37cf799b868dbf647e6ca97c08949096
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.010787 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.029440 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9vml\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-kube-api-access-g9vml\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.038599 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zhgm\" (UniqueName: \"kubernetes.io/projected/42ff05d2-dda3-411f-bcee-816f87ce21b8-kube-api-access-6zhgm\") pod \"machine-config-controller-84d6567774-nz65j\" (UID: \"42ff05d2-dda3-411f-bcee-816f87ce21b8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"
Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.049575 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10f8b640_1372_484f_b42f_97e336fb2992.slice/crio-b57bbd2d09ffe27c90b7d58b6c0369ba5185d430aaf719dc0615bea6aa6af56a WatchSource:0}: Error finding container b57bbd2d09ffe27c90b7d58b6c0369ba5185d430aaf719dc0615bea6aa6af56a: Status 404 returned error can't find the container with id b57bbd2d09ffe27c90b7d58b6c0369ba5185d430aaf719dc0615bea6aa6af56a
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.060396 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5dc9g"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.061270 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa1b5822-c8a6-4fdb-b42f-8a94469a65ef-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-r45fr\" (UID: \"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.084521 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d69d0f34-1e03-438d-9d97-de945aff185f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hbn7m\" (UID: \"d69d0f34-1e03-438d-9d97-de945aff185f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.103182 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kqzj\" (UniqueName: \"kubernetes.io/projected/57b85eac-df63-4c81-abe6-3dba293df9c2-kube-api-access-2kqzj\") pod \"openshift-config-operator-7777fb866f-2mfc5\" (UID: \"57b85eac-df63-4c81-abe6-3dba293df9c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.117131 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.121000 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w98wq\" (UniqueName: \"kubernetes.io/projected/bf3383aa-e821-4389-b2f0-cc697ad4cc7a-kube-api-access-w98wq\") pod \"dns-operator-744455d44c-5wqx2\" (UID: \"bf3383aa-e821-4389-b2f0-cc697ad4cc7a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.137468 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.140523 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw98m\" (UniqueName: \"kubernetes.io/projected/091908d5-acab-418a-a5f2-fa909294222a-kube-api-access-bw98m\") pod \"catalog-operator-68c6474976-j9jgh\" (UID: \"091908d5-acab-418a-a5f2-fa909294222a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.147241 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.147283 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.152409 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c4df1b8_c014_42db_ab26_6ac05f72c8ba.slice/crio-1fc683608d36d02d3c937e8c1674591c83ce13f8c73baa4d09c561910ea81503 WatchSource:0}: Error finding container 1fc683608d36d02d3c937e8c1674591c83ce13f8c73baa4d09c561910ea81503: Status 404 returned error can't find the container with id 1fc683608d36d02d3c937e8c1674591c83ce13f8c73baa4d09c561910ea81503
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.165570 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8ll8\" (UniqueName: \"kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8\") pod \"collect-profiles-29500245-vpjnw\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.172753 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4rp8p"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.175569 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.187761 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.188536 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvvqc\" (UniqueName: \"kubernetes.io/projected/fd96d668-a9b2-474f-8617-17eca5f01191-kube-api-access-xvvqc\") pod \"etcd-operator-b45778765-lh2qm\" (UID: \"fd96d668-a9b2-474f-8617-17eca5f01191\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm"
Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.197142 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode08cb720_1a1d_47c3_a787_c61d377bf2dd.slice/crio-1ac6d4b77f93f638344c658e47ff5f6af1d09ed7235d2813e462e9c82adf25dc WatchSource:0}: Error finding container 1ac6d4b77f93f638344c658e47ff5f6af1d09ed7235d2813e462e9c82adf25dc: Status 404 returned error can't find the container with id 1ac6d4b77f93f638344c658e47ff5f6af1d09ed7235d2813e462e9c82adf25dc
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.199103 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-qdspj"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.206674 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpmb2\" (UniqueName: \"kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2\") pod \"console-f9d7485db-kmw8f\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " pod="openshift-console/console-f9d7485db-kmw8f"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.222634 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ee0e33-a160-4303-af00-0b145647f807-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ck7h4\" (UID: \"f2ee0e33-a160-4303-af00-0b145647f807\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4"
Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.248801 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45dcaecb_f74e_4eaf_886a_28b6632f8d44.slice/crio-c164c2fb6171110ceac1a578d427c26e9602d9a750688fb911191639197ea84c WatchSource:0}: Error finding container c164c2fb6171110ceac1a578d427c26e9602d9a750688fb911191639197ea84c: Status 404 returned error can't find the container with id c164c2fb6171110ceac1a578d427c26e9602d9a750688fb911191639197ea84c
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.251070 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.256741 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.258982 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dnd7\" (UniqueName: \"kubernetes.io/projected/27bce4a1-799c-4d40-900c-455eaba28398-kube-api-access-2dnd7\") pod \"multus-admission-controller-857f4d67dd-h6pjl\" (UID: \"27bce4a1-799c-4d40-900c-455eaba28398\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.265936 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.270578 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqnfv\" (UniqueName: \"kubernetes.io/projected/cc176201-02a2-46c0-903c-13943d989195-kube-api-access-wqnfv\") pod \"downloads-7954f5f757-pbtq6\" (UID: \"cc176201-02a2-46c0-903c-13943d989195\") " pod="openshift-console/downloads-7954f5f757-pbtq6"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.284082 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.310418 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-wjrtc"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.333839 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.340698 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"]
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341291 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4snk\" (UniqueName: \"kubernetes.io/projected/bc8e3a2f-b630-40bf-865e-c7a035385730-kube-api-access-z4snk\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc"
Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341321 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e95addab-99c5-499c-92bc-f13fd4870710-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x"
\"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341369 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341403 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341422 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrmkg\" (UniqueName: \"kubernetes.io/projected/e95addab-99c5-499c-92bc-f13fd4870710-kube-api-access-qrmkg\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341450 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-key\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341467 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-stats-auth\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341484 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/99922ba3-dd03-4c94-9663-9c530f7b3ad0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341500 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341527 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8e3a2f-b630-40bf-865e-c7a035385730-serving-cert\") pod 
\"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341546 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58h4c\" (UniqueName: \"kubernetes.io/projected/99922ba3-dd03-4c94-9663-9c530f7b3ad0-kube-api-access-58h4c\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.341908 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:42.841894672 +0000 UTC m=+148.219162584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.341581 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gtlm\" (UniqueName: \"kubernetes.io/projected/23594203-b17a-4d98-95da-a7c0e3a2ef4e-kube-api-access-7gtlm\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343318 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343358 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-config\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343382 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8e3a2f-b630-40bf-865e-c7a035385730-config\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343404 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krdtw\" (UniqueName: \"kubernetes.io/projected/29629b99-9606-4830-9623-8c81cecbd0a9-kube-api-access-krdtw\") pod 
\"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343446 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-metrics-certs\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343461 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343475 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-default-certificate\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343509 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23594203-b17a-4d98-95da-a7c0e3a2ef4e-service-ca-bundle\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343527 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwwsr\" (UniqueName: \"kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343644 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343666 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: 
\"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.343905 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-cabundle\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.344123 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/29629b99-9606-4830-9623-8c81cecbd0a9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.344150 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnwc\" (UniqueName: \"kubernetes.io/projected/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-kube-api-access-wdnwc\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.344251 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.344310 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.344358 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjbqr\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.372413 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" event={"ID":"c7352a46-964e-478a-a141-7b1f3d529b85","Type":"ContainerStarted","Data":"44ebd0c802db6062893241169e4706979097a692764a061e2fde6a02c71197ca"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.373390 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" event={"ID":"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe","Type":"ContainerStarted","Data":"1810ccd323bfce1d8d33adb40473a3ade0c6cc4b2982aa8de512e861bebf9e9f"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.374644 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" event={"ID":"74549f13-263e-4e4f-8331-9f7fd6bf36b3","Type":"ContainerStarted","Data":"3785bb331ff60311311b350a2e5064a83ff8c02ccc368737bd311989b3d76b5b"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.374697 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" event={"ID":"74549f13-263e-4e4f-8331-9f7fd6bf36b3","Type":"ContainerStarted","Data":"4784625b0f83e3fea7414409f770b17f45d8471eb978a04de27ca3b0b1a07a11"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.375691 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" event={"ID":"e08cb720-1a1d-47c3-a787-c61d377bf2dd","Type":"ContainerStarted","Data":"1ac6d4b77f93f638344c658e47ff5f6af1d09ed7235d2813e462e9c82adf25dc"} Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.376735 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aa0cd7d_de34_4c00_8eb2_40e35e430b5d.slice/crio-12fa26e22eeaf69b0062d177a21558837de011ed6da5184d7f1750e5b3ea0dd6 WatchSource:0}: Error finding container 12fa26e22eeaf69b0062d177a21558837de011ed6da5184d7f1750e5b3ea0dd6: Status 404 returned error can't find the container with id 12fa26e22eeaf69b0062d177a21558837de011ed6da5184d7f1750e5b3ea0dd6 Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.377350 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" event={"ID":"10f8b640-1372-484f-b42f-97e336fb2992","Type":"ContainerStarted","Data":"b57bbd2d09ffe27c90b7d58b6c0369ba5185d430aaf719dc0615bea6aa6af56a"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.379075 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" event={"ID":"7c4df1b8-c014-42db-ab26-6ac05f72c8ba","Type":"ContainerStarted","Data":"1fc683608d36d02d3c937e8c1674591c83ce13f8c73baa4d09c561910ea81503"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.379987 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" event={"ID":"e4367135-ecb4-447d-a89e-5dcbeffe345e","Type":"ContainerStarted","Data":"8f87e94e972949701dd8325de9ff009c37cf799b868dbf647e6ca97c08949096"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.381077 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" event={"ID":"45dcaecb-f74e-4eaf-886a-28b6632f8d44","Type":"ContainerStarted","Data":"c164c2fb6171110ceac1a578d427c26e9602d9a750688fb911191639197ea84c"} Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.394430 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.413942 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.414652 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.415125 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.436160 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.437313 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.445572 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.445706 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.447788 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:42.947764361 +0000 UTC m=+148.325032273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.447811 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c976fbc-6a91-494d-8d9e-1abe8119acf9-config-volume\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.447849 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tm72\" (UniqueName: \"kubernetes.io/projected/a8cad1e4-b070-477e-a20a-5cf8cb397e85-kube-api-access-6tm72\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448203 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbqhp\" (UniqueName: \"kubernetes.io/projected/3c976fbc-6a91-494d-8d9e-1abe8119acf9-kube-api-access-pbqhp\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448248 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966b8965-4dbb-4735-9564-eac0652fa990-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: 
\"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448300 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gtlm\" (UniqueName: \"kubernetes.io/projected/23594203-b17a-4d98-95da-a7c0e3a2ef4e-kube-api-access-7gtlm\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448342 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448359 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-plugins-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448386 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-config\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448432 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8e3a2f-b630-40bf-865e-c7a035385730-config\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448476 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krdtw\" (UniqueName: \"kubernetes.io/projected/29629b99-9606-4830-9623-8c81cecbd0a9-kube-api-access-krdtw\") pod \"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448517 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-metrics-certs\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448550 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-mountpoint-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 
06:48:42.448576 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448593 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-default-certificate\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448609 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23594203-b17a-4d98-95da-a7c0e3a2ef4e-service-ca-bundle\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448625 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-apiservice-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448661 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwwsr\" (UniqueName: \"kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448678 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns5sc\" (UniqueName: \"kubernetes.io/projected/6d58ee7c-c176-4ddd-af48-d9406f4eac74-kube-api-access-ns5sc\") pod \"migrator-59844c95c7-kgv82\" (UID: \"6d58ee7c-c176-4ddd-af48-d9406f4eac74\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448692 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86a554b4-30b1-4521-8677-d1974308a379-cert\") pod \"ingress-canary-kb6j9\" (UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448732 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/966b8965-4dbb-4735-9564-eac0652fa990-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448762 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448777 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwrq2\" (UniqueName: \"kubernetes.io/projected/966b8965-4dbb-4735-9564-eac0652fa990-kube-api-access-cwrq2\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448795 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/90441cdf-d9ad-48d8-a400-9c770bc81a60-kube-api-access-q996c\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448809 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448823 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-tmpfs\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448857 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-cabundle\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448923 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/29629b99-9606-4830-9623-8c81cecbd0a9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448940 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-srv-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448957 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdnwc\" (UniqueName: \"kubernetes.io/projected/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-kube-api-access-wdnwc\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.448972 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-webhook-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.449005 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-socket-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.449022 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2slks\" (UniqueName: \"kubernetes.io/projected/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-kube-api-access-2slks\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.449059 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.449078 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450068 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-config\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450260 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-certs\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " 
pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450285 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-csi-data-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450340 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjbqr\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450374 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4snk\" (UniqueName: \"kubernetes.io/projected/bc8e3a2f-b630-40bf-865e-c7a035385730-kube-api-access-z4snk\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450408 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e95addab-99c5-499c-92bc-f13fd4870710-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450429 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450569 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450592 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450631 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 
06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450649 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-node-bootstrap-token\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450665 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-registration-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450730 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450747 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz7xq\" (UniqueName: \"kubernetes.io/projected/14030278-3de4-4425-8308-813d4f7c0a2d-kube-api-access-tz7xq\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450777 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrmkg\" (UniqueName: \"kubernetes.io/projected/e95addab-99c5-499c-92bc-f13fd4870710-kube-api-access-qrmkg\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450811 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-key\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450840 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-images\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9mp\" (UniqueName: \"kubernetes.io/projected/86a554b4-30b1-4521-8677-d1974308a379-kube-api-access-cf9mp\") pod \"ingress-canary-kb6j9\" (UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450912 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-cabundle\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450967 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-stats-auth\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.450988 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/99922ba3-dd03-4c94-9663-9c530f7b3ad0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451008 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msm6c\" (UniqueName: \"kubernetes.io/projected/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-kube-api-access-msm6c\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451025 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451064 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8e3a2f-b630-40bf-865e-c7a035385730-serving-cert\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451100 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c976fbc-6a91-494d-8d9e-1abe8119acf9-metrics-tls\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451118 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58h4c\" (UniqueName: \"kubernetes.io/projected/99922ba3-dd03-4c94-9663-9c530f7b3ad0-kube-api-access-58h4c\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.451135 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/a8cad1e4-b070-477e-a20a-5cf8cb397e85-proxy-tls\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.451161 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:42.951150693 +0000 UTC m=+148.328418595 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.455526 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc8e3a2f-b630-40bf-865e-c7a035385730-config\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.455552 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.456801 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.457525 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23594203-b17a-4d98-95da-a7c0e3a2ef4e-service-ca-bundle\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.457803 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.458059 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/29629b99-9606-4830-9623-8c81cecbd0a9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.459067 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.459166 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e95addab-99c5-499c-92bc-f13fd4870710-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.459999 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc8e3a2f-b630-40bf-865e-c7a035385730-serving-cert\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.460095 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.460508 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceaf90b2_229c_4452_8a1b_fd016682bf6e.slice/crio-13efea9185082b7d981af116b6c37c2792ed02efaff8abdff2ee0e301c453f7a WatchSource:0}: Error finding container 13efea9185082b7d981af116b6c37c2792ed02efaff8abdff2ee0e301c453f7a: Status 404 returned error can't find the container with id 13efea9185082b7d981af116b6c37c2792ed02efaff8abdff2ee0e301c453f7a Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.462775 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-signing-key\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.463425 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.463774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-metrics-certs\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.464447 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.465099 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-stats-auth\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.466254 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.467206 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/23594203-b17a-4d98-95da-a7c0e3a2ef4e-default-certificate\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.480846 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/99922ba3-dd03-4c94-9663-9c530f7b3ad0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.495077 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.509069 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gtlm\" (UniqueName: \"kubernetes.io/projected/23594203-b17a-4d98-95da-a7c0e3a2ef4e-kube-api-access-7gtlm\") pod \"router-default-5444994796-j7bfz\" (UID: \"23594203-b17a-4d98-95da-a7c0e3a2ef4e\") " pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.521327 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krdtw\" (UniqueName: \"kubernetes.io/projected/29629b99-9606-4830-9623-8c81cecbd0a9-kube-api-access-krdtw\") pod \"package-server-manager-789f6589d5-wv68j\" (UID: \"29629b99-9606-4830-9623-8c81cecbd0a9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.538629 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.544804 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwwsr\" (UniqueName: \"kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr\") pod \"marketplace-operator-79b997595-bzsxn\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.549446 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551544 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.551750 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.051718044 +0000 UTC m=+148.428985946 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551789 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551824 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz7xq\" (UniqueName: \"kubernetes.io/projected/14030278-3de4-4425-8308-813d4f7c0a2d-kube-api-access-tz7xq\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551856 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-images\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551885 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9mp\" (UniqueName: \"kubernetes.io/projected/86a554b4-30b1-4521-8677-d1974308a379-kube-api-access-cf9mp\") pod \"ingress-canary-kb6j9\" 
(UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551907 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msm6c\" (UniqueName: \"kubernetes.io/projected/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-kube-api-access-msm6c\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551930 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c976fbc-6a91-494d-8d9e-1abe8119acf9-metrics-tls\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551955 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8cad1e4-b070-477e-a20a-5cf8cb397e85-proxy-tls\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551976 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c976fbc-6a91-494d-8d9e-1abe8119acf9-config-volume\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.551992 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tm72\" (UniqueName: \"kubernetes.io/projected/a8cad1e4-b070-477e-a20a-5cf8cb397e85-kube-api-access-6tm72\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552010 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbqhp\" (UniqueName: \"kubernetes.io/projected/3c976fbc-6a91-494d-8d9e-1abe8119acf9-kube-api-access-pbqhp\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552037 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966b8965-4dbb-4735-9564-eac0652fa990-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552061 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-plugins-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552881 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-mountpoint-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552924 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-apiservice-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552946 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86a554b4-30b1-4521-8677-d1974308a379-cert\") pod \"ingress-canary-kb6j9\" (UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.552966 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns5sc\" (UniqueName: \"kubernetes.io/projected/6d58ee7c-c176-4ddd-af48-d9406f4eac74-kube-api-access-ns5sc\") pod \"migrator-59844c95c7-kgv82\" (UID: \"6d58ee7c-c176-4ddd-af48-d9406f4eac74\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553025 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/966b8965-4dbb-4735-9564-eac0652fa990-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553045 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwrq2\" (UniqueName: \"kubernetes.io/projected/966b8965-4dbb-4735-9564-eac0652fa990-kube-api-access-cwrq2\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553081 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/90441cdf-d9ad-48d8-a400-9c770bc81a60-kube-api-access-q996c\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553100 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-tmpfs\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553126 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-srv-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553171 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-webhook-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553193 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-socket-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553211 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2slks\" (UniqueName: \"kubernetes.io/projected/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-kube-api-access-2slks\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553255 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-certs\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553269 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-csi-data-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553324 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553347 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553365 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-registration-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553408 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-node-bootstrap-token\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553527 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c976fbc-6a91-494d-8d9e-1abe8119acf9-config-volume\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.553860 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-images\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.554634 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.054608114 +0000 UTC m=+148.431876026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.554938 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-tmpfs\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.555072 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3c976fbc-6a91-494d-8d9e-1abe8119acf9-metrics-tls\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.555714 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966b8965-4dbb-4735-9564-eac0652fa990-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.556534 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8cad1e4-b070-477e-a20a-5cf8cb397e85-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.556603 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-csi-data-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.557628 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-apiservice-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.557685 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-plugins-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.557714 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-mountpoint-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.557759 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-socket-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.557898 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/90441cdf-d9ad-48d8-a400-9c770bc81a60-registration-dir\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.558720 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8cad1e4-b070-477e-a20a-5cf8cb397e85-proxy-tls\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.564011 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-node-bootstrap-token\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.564758 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14030278-3de4-4425-8308-813d4f7c0a2d-certs\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.572813 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/966b8965-4dbb-4735-9564-eac0652fa990-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.572903 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.572921 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-srv-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.573162 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86a554b4-30b1-4521-8677-d1974308a379-cert\") pod \"ingress-canary-kb6j9\" (UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.573390 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-webhook-cert\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.573412 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-profile-collector-cert\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.587206 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrmkg\" (UniqueName: \"kubernetes.io/projected/e95addab-99c5-499c-92bc-f13fd4870710-kube-api-access-qrmkg\") pod \"cluster-samples-operator-665b6dd947-n9v5x\" (UID: \"e95addab-99c5-499c-92bc-f13fd4870710\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.598129 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4snk\" (UniqueName: \"kubernetes.io/projected/bc8e3a2f-b630-40bf-865e-c7a035385730-kube-api-access-z4snk\") pod \"service-ca-operator-777779d784-n42rc\" (UID: \"bc8e3a2f-b630-40bf-865e-c7a035385730\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.619650 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lh2qm"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.624442 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zn7j9\" (UID: \"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.641850 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.655876 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.656386 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.156366793 +0000 UTC m=+148.533634705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.663126 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdnwc\" (UniqueName: \"kubernetes.io/projected/9f265e28-d9d2-43db-b43b-8f7d778b2fa5-kube-api-access-wdnwc\") pod \"service-ca-9c57cc56f-hv9fc\" (UID: \"9f265e28-d9d2-43db-b43b-8f7d778b2fa5\") " pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.667068 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58h4c\" (UniqueName: \"kubernetes.io/projected/99922ba3-dd03-4c94-9663-9c530f7b3ad0-kube-api-access-58h4c\") pod \"control-plane-machine-set-operator-78cbb6b69f-gnmkq\" (UID: \"99922ba3-dd03-4c94-9663-9c530f7b3ad0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.685958 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjbqr\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.721476 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.725479 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tm72\" (UniqueName: \"kubernetes.io/projected/a8cad1e4-b070-477e-a20a-5cf8cb397e85-kube-api-access-6tm72\") pod \"machine-config-operator-74547568cd-w66ps\" (UID: \"a8cad1e4-b070-477e-a20a-5cf8cb397e85\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.732615 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.752116 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbqhp\" (UniqueName: \"kubernetes.io/projected/3c976fbc-6a91-494d-8d9e-1abe8119acf9-kube-api-access-pbqhp\") pod \"dns-default-z2sjd\" (UID: \"3c976fbc-6a91-494d-8d9e-1abe8119acf9\") " pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.757484 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.757805 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.257791954 +0000 UTC m=+148.635059866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.766602 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9mp\" (UniqueName: \"kubernetes.io/projected/86a554b4-30b1-4521-8677-d1974308a379-kube-api-access-cf9mp\") pod \"ingress-canary-kb6j9\" (UID: \"86a554b4-30b1-4521-8677-d1974308a379\") " pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.768037 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.788875 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwrq2\" (UniqueName: \"kubernetes.io/projected/966b8965-4dbb-4735-9564-eac0652fa990-kube-api-access-cwrq2\") pod \"kube-storage-version-migrator-operator-b67b599dd-rx6hm\" (UID: \"966b8965-4dbb-4735-9564-eac0652fa990\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.791661 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.801053 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msm6c\" (UniqueName: \"kubernetes.io/projected/1b0e61a0-72dd-4edd-8217-c7b157e2c38c-kube-api-access-msm6c\") pod \"packageserver-d55dfcdfc-n6n4t\" (UID: \"1b0e61a0-72dd-4edd-8217-c7b157e2c38c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.803988 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.811076 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.822829 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.830850 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz7xq\" (UniqueName: \"kubernetes.io/projected/14030278-3de4-4425-8308-813d4f7c0a2d-kube-api-access-tz7xq\") pod \"machine-config-server-m2mqz\" (UID: \"14030278-3de4-4425-8308-813d4f7c0a2d\") " pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.840151 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2slks\" (UniqueName: \"kubernetes.io/projected/b7ceecfd-f2a9-4c82-85de-e32eb001eb2b-kube-api-access-2slks\") pod \"olm-operator-6b444d44fb-z8q7b\" (UID: \"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.851430 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.861004 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.861324 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.361304676 +0000 UTC m=+148.738572588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.870996 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q996c\" (UniqueName: \"kubernetes.io/projected/90441cdf-d9ad-48d8-a400-9c770bc81a60-kube-api-access-q996c\") pod \"csi-hostpathplugin-6fhk9\" (UID: \"90441cdf-d9ad-48d8-a400-9c770bc81a60\") " pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.878178 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.878977 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.884277 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.886336 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5wqx2"] Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.888778 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns5sc\" (UniqueName: \"kubernetes.io/projected/6d58ee7c-c176-4ddd-af48-d9406f4eac74-kube-api-access-ns5sc\") pod \"migrator-59844c95c7-kgv82\" (UID: \"6d58ee7c-c176-4ddd-af48-d9406f4eac74\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.890119 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.898490 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.906569 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m2mqz" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.913707 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.935940 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.946254 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kb6j9" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.949420 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:42 crc kubenswrapper[4842]: I0202 06:48:42.962040 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:42 crc kubenswrapper[4842]: E0202 06:48:42.962558 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.462544832 +0000 UTC m=+148.839812744 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.977913 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b43b464_5623_46bb_8097_65b505d08960.slice/crio-5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34 WatchSource:0}: Error finding container 5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34: Status 404 returned error can't find the container with id 5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34 Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.982063 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf3383aa_e821_4389_b2f0_cc697ad4cc7a.slice/crio-3cddd3a52aafeec12f90233681b01486a47adbd0a6f4f02a873d81e9ec7c6cda WatchSource:0}: Error finding container 3cddd3a52aafeec12f90233681b01486a47adbd0a6f4f02a873d81e9ec7c6cda: Status 404 returned error can't find the container with id 3cddd3a52aafeec12f90233681b01486a47adbd0a6f4f02a873d81e9ec7c6cda Feb 02 06:48:42 crc kubenswrapper[4842]: W0202 06:48:42.984786 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42ff05d2_dda3_411f_bcee_816f87ce21b8.slice/crio-ac5d8c61f13048d2a60d58a9dde843ea4257a79e5de57b1cf689ae0265f1aa85 WatchSource:0}: Error finding container ac5d8c61f13048d2a60d58a9dde843ea4257a79e5de57b1cf689ae0265f1aa85: Status 404 returned error can't find the container with id ac5d8c61f13048d2a60d58a9dde843ea4257a79e5de57b1cf689ae0265f1aa85 Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.063721 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.064252 4842 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.56423308 +0000 UTC m=+148.941500992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.066566 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x"] Feb 02 06:48:43 crc kubenswrapper[4842]: W0202 06:48:43.100754 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23594203_b17a_4d98_95da_a7c0e3a2ef4e.slice/crio-7c436f2068159849a1430e912428ed6855ffb1465ddb0bf1ae175ec4fa9e6eee WatchSource:0}: Error finding container 7c436f2068159849a1430e912428ed6855ffb1465ddb0bf1ae175ec4fa9e6eee: Status 404 returned error can't find the container with id 7c436f2068159849a1430e912428ed6855ffb1465ddb0bf1ae175ec4fa9e6eee Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.145502 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-pbtq6"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.151350 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.152851 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.165372 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.165665 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.665651781 +0000 UTC m=+149.042919693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.173359 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.225715 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h6pjl"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.266988 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.267952 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.767933043 +0000 UTC m=+149.145200945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: W0202 06:48:43.277168 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57b85eac_df63_4c81_abe6_3dba293df9c2.slice/crio-3ab9a1e80b2e9f9bd86826c0a8c923659eb16b3b9cb43a3fe7c8fc4c09f48521 WatchSource:0}: Error finding container 3ab9a1e80b2e9f9bd86826c0a8c923659eb16b3b9cb43a3fe7c8fc4c09f48521: Status 404 returned error can't find the container with id 3ab9a1e80b2e9f9bd86826c0a8c923659eb16b3b9cb43a3fe7c8fc4c09f48521 Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.377553 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.378436 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.378498 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.378540 4842 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.378930 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.878917466 +0000 UTC m=+149.256185378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.384603 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.398271 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.399954 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.407476 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" event={"ID":"42ff05d2-dda3-411f-bcee-816f87ce21b8","Type":"ContainerStarted","Data":"ac5d8c61f13048d2a60d58a9dde843ea4257a79e5de57b1cf689ae0265f1aa85"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.411559 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m2mqz" event={"ID":"14030278-3de4-4425-8308-813d4f7c0a2d","Type":"ContainerStarted","Data":"191b477ad776b94a2600969b4929206555e02f392f64c625a9e5dd238356e0ee"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.420123 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" event={"ID":"3a1b2909-d542-48b0-8729-294f7950ab2d","Type":"ContainerStarted","Data":"64198cd4ed9c3f648a83a0d5cc2017b0e62648734deb3f42088a21d4a035b132"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.420184 4842 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" event={"ID":"3a1b2909-d542-48b0-8729-294f7950ab2d","Type":"ContainerStarted","Data":"643cd1b7543d0a40a6f2280aca5f3b03741bd2063f49a6310b7a1671fc67d3cc"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.421236 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.447998 4842 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-brh4m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.448075 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.478052 4842 generic.go:334] "Generic (PLEG): container finished" podID="10f8b640-1372-484f-b42f-97e336fb2992" containerID="33308fffc29e09c2809c8296fe5ed110a7c17807a90952ba788c9e21c7133299" exitCode=0 Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.476958 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" event={"ID":"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef","Type":"ContainerStarted","Data":"70f7df960c8c15dc99df889941b319c6bdc1ecff906022dec5bf662487f58a4c"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.478990 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" event={"ID":"aa1b5822-c8a6-4fdb-b42f-8a94469a65ef","Type":"ContainerStarted","Data":"188f3a400d52af94f196b7bfd0f212fbf10bc7c314e43ab85fab6d2ed1708e8f"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.479187 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" event={"ID":"10f8b640-1372-484f-b42f-97e336fb2992","Type":"ContainerDied","Data":"33308fffc29e09c2809c8296fe5ed110a7c17807a90952ba788c9e21c7133299"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.480376 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.480634 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.482513 4842 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:43.982483949 +0000 UTC m=+149.359751861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.495929 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.496770 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.500326 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.546735 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" event={"ID":"e08cb720-1a1d-47c3-a787-c61d377bf2dd","Type":"ContainerStarted","Data":"bdb1e584a03832c94aa5f1bf36e11d0a2a871b030797a8652337af4f9beecb08"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.546814 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.564135 4842 patch_prober.go:28] interesting pod/console-operator-58897d9998-4rp8p container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.564695 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" podUID="e08cb720-1a1d-47c3-a787-c61d377bf2dd" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.583260 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.585119 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 06:48:44.085102689 +0000 UTC m=+149.462370601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.623402 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" event={"ID":"e4367135-ecb4-447d-a89e-5dcbeffe345e","Type":"ContainerStarted","Data":"4a72f24d6a3cbccd529641d399febb8d89d65c4272b29b63f17cc77940c63603"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.668513 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" event={"ID":"e95addab-99c5-499c-92bc-f13fd4870710","Type":"ContainerStarted","Data":"b3003661d21f7ddeaa342d70fec0f1a595d7db0dd41d7d3b64338bb52034151e"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.711356 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.713196 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" event={"ID":"27bce4a1-799c-4d40-900c-455eaba28398","Type":"ContainerStarted","Data":"09e95bced85da80bd8ffd68f3301db9615973f0b26cbc28b817abe57671274ad"} Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.713660 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.213637718 +0000 UTC m=+149.590905630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.718785 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" event={"ID":"57b85eac-df63-4c81-abe6-3dba293df9c2","Type":"ContainerStarted","Data":"3ab9a1e80b2e9f9bd86826c0a8c923659eb16b3b9cb43a3fe7c8fc4c09f48521"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.733176 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" event={"ID":"74549f13-263e-4e4f-8331-9f7fd6bf36b3","Type":"ContainerStarted","Data":"2b06ca9643e6dd66ea229dc73db41bbef76bafb5e58300e8bc881d1a7b0842f2"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.738964 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" event={"ID":"bf3383aa-e821-4389-b2f0-cc697ad4cc7a","Type":"ContainerStarted","Data":"3cddd3a52aafeec12f90233681b01486a47adbd0a6f4f02a873d81e9ec7c6cda"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.746001 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.751747 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.752134 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" event={"ID":"091908d5-acab-418a-a5f2-fa909294222a","Type":"ContainerStarted","Data":"65dc9362c6b26f739995b4de9917da7cb58d0cae90f7b95923ceefe53ac9c22f"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.752176 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" event={"ID":"091908d5-acab-418a-a5f2-fa909294222a","Type":"ContainerStarted","Data":"9ed29c80f17cb758d8b4ef130b3adcd8c80632dbec158c183db1e837dd9a47dc"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.755264 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.761144 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.770087 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" event={"ID":"fd96d668-a9b2-474f-8617-17eca5f01191","Type":"ContainerStarted","Data":"bc0f60c5880e048d9b8d09aa27d50fdf78cd9c8eef2084028b57c06b7e7231e8"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.772437 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kmw8f" event={"ID":"59990591-2248-489b-bac2-e7cab22482f8","Type":"ContainerStarted","Data":"f626d676ce0b2dbd85f858b166fb0050d475783a83143a42e19f369ae37353e6"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.779958 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" event={"ID":"5b43b464-5623-46bb-8097-65b505d08960","Type":"ContainerStarted","Data":"5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.802512 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" event={"ID":"d69d0f34-1e03-438d-9d97-de945aff185f","Type":"ContainerStarted","Data":"b0ffad2cd3c45f0a4e916abe4e0753f6e6d92ab58c59073999ac49730b021db9"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.809784 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" event={"ID":"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d","Type":"ContainerStarted","Data":"239aea454323fbca3eb7b074809688382235f97c4aaeec9ff2a95a2f210123bf"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.809821 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" event={"ID":"5aa0cd7d-de34-4c00-8eb2-40e35e430b5d","Type":"ContainerStarted","Data":"12fa26e22eeaf69b0062d177a21558837de011ed6da5184d7f1750e5b3ea0dd6"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.812680 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.812726 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" event={"ID":"bf91f3e9-19c2-4f18-b129-41aafd1a1264","Type":"ContainerStarted","Data":"25634892eeeb42d0ef66d036ba3180352e61cb89dc73ca05e000cddfc7ed5d5f"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.812772 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" event={"ID":"bf91f3e9-19c2-4f18-b129-41aafd1a1264","Type":"ContainerStarted","Data":"9e442ed8624abf7c7c008be60f767ce4757519be014cdfd4e95fe98d8969b767"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.812936 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.813056 4842 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.31304023 +0000 UTC m=+149.690308142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.816337 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-pbtq6" event={"ID":"cc176201-02a2-46c0-903c-13943d989195","Type":"ContainerStarted","Data":"abb907cbbedc7828acfd06c8ee8bae680599c1c5999a4680cb0c9a6dee0b95ad"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.829291 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" event={"ID":"7c4df1b8-c014-42db-ab26-6ac05f72c8ba","Type":"ContainerStarted","Data":"2dc282acc934af0f4a041ef148e88a9e0d6a5040600b529a7e6e282fd12e43b2"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.836573 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t"] Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.838473 4842 generic.go:334] "Generic (PLEG): container finished" podID="d8b4ca95-d26b-4f03-b095-b5096b6c3fbe" containerID="b7385cd6372928f96bc72bbc29e57087705ce0ea17acf32f23a5328a7a0b2ec4" exitCode=0 Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.838521 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" event={"ID":"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe","Type":"ContainerDied","Data":"b7385cd6372928f96bc72bbc29e57087705ce0ea17acf32f23a5328a7a0b2ec4"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.861370 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" event={"ID":"45dcaecb-f74e-4eaf-886a-28b6632f8d44","Type":"ContainerStarted","Data":"fe44bedac52c769b93786c7124dc2a65a35448b9d0da00c8e6691fabf5fe1c67"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.867765 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" event={"ID":"c7352a46-964e-478a-a141-7b1f3d529b85","Type":"ContainerStarted","Data":"ba883d0dbff2f8d72bcfa41bc18c26959b10543f2aee551d9c4325bf6653ef2e"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.868366 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.873279 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" event={"ID":"f2ee0e33-a160-4303-af00-0b145647f807","Type":"ContainerStarted","Data":"ccb5c4e8c7fd3c61220db19517da2bd7a1b1f1f9f5c81cb9219024caa0cd37d7"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.876164 4842 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" event={"ID":"ceaf90b2-229c-4452-8a1b-fd016682bf6e","Type":"ContainerStarted","Data":"40a514a6aabc79b06fa62bd09dac8e951547078fe0891998d0bf0db2343e22b5"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.876190 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" event={"ID":"ceaf90b2-229c-4452-8a1b-fd016682bf6e","Type":"ContainerStarted","Data":"13efea9185082b7d981af116b6c37c2792ed02efaff8abdff2ee0e301c453f7a"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.880511 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-j7bfz" event={"ID":"23594203-b17a-4d98-95da-a7c0e3a2ef4e","Type":"ContainerStarted","Data":"7c436f2068159849a1430e912428ed6855ffb1465ddb0bf1ae175ec4fa9e6eee"} Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.906836 4842 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-j9jgh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.906891 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" podUID="091908d5-acab-418a-a5f2-fa909294222a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.907512 4842 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rssw5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.907539 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.907594 4842 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hj5sv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" start-of-body= Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.907607 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.27:6443/healthz\": dial tcp 10.217.0.27:6443: connect: connection refused" Feb 02 06:48:43 crc kubenswrapper[4842]: I0202 06:48:43.914439 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:43 crc kubenswrapper[4842]: E0202 06:48:43.921007 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.42098744 +0000 UTC m=+149.798255352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.019103 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.023680 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.523657041 +0000 UTC m=+149.900924953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.121800 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.122179 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.622159881 +0000 UTC m=+149.999427783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.223155 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.223822 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.723809038 +0000 UTC m=+150.101076940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.315733 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh" podStartSLOduration=128.315710518 podStartE2EDuration="2m8.315710518s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.314625941 +0000 UTC m=+149.691893853" watchObservedRunningTime="2026-02-02 06:48:44.315710518 +0000 UTC m=+149.692978430" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.326496 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.326944 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.8269251 +0000 UTC m=+150.204193012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.360902 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cd8zk" podStartSLOduration=128.360877674 podStartE2EDuration="2m8.360877674s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.353265639 +0000 UTC m=+149.730533551" watchObservedRunningTime="2026-02-02 06:48:44.360877674 +0000 UTC m=+149.738145586" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.405298 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n42rc"] Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.429467 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.429873 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:44.929859568 +0000 UTC m=+150.307127480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.440018 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-99kbj" podStartSLOduration=128.439995614 podStartE2EDuration="2m8.439995614s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.435687829 +0000 UTC m=+149.812955741" watchObservedRunningTime="2026-02-02 06:48:44.439995614 +0000 UTC m=+149.817263526" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.500458 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" podStartSLOduration=128.500429059 podStartE2EDuration="2m8.500429059s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.475043664 +0000 UTC m=+149.852311576" watchObservedRunningTime="2026-02-02 06:48:44.500429059 +0000 UTC m=+149.877696971" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.505394 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"] Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.514759 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" podStartSLOduration=128.514738456 podStartE2EDuration="2m8.514738456s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.508101465 +0000 UTC m=+149.885369377" watchObservedRunningTime="2026-02-02 06:48:44.514738456 +0000 UTC m=+149.892006368" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.531651 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.532019 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.031995245 +0000 UTC m=+150.409263157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.581160 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kmxhp" podStartSLOduration=128.581136767 podStartE2EDuration="2m8.581136767s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.580690877 +0000 UTC m=+149.957958799" watchObservedRunningTime="2026-02-02 06:48:44.581136767 +0000 UTC m=+149.958404679" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.581634 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-kmw8f" podStartSLOduration=128.581624909 podStartE2EDuration="2m8.581624909s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.554556392 +0000 UTC m=+149.931824314" watchObservedRunningTime="2026-02-02 06:48:44.581624909 +0000 UTC m=+149.958892821" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.616324 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-r45fr" podStartSLOduration=128.616300291 podStartE2EDuration="2m8.616300291s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.615997463 +0000 UTC m=+149.993265375" watchObservedRunningTime="2026-02-02 06:48:44.616300291 +0000 UTC m=+149.993568203" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.632878 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.633181 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.13316689 +0000 UTC m=+150.510434802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.657523 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" podStartSLOduration=128.657505161 podStartE2EDuration="2m8.657505161s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.655629855 +0000 UTC m=+150.032897767" watchObservedRunningTime="2026-02-02 06:48:44.657505161 +0000 UTC m=+150.034773063" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.733645 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.733969 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.233950336 +0000 UTC m=+150.611218248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.741375 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" podStartSLOduration=128.741356675 podStartE2EDuration="2m8.741356675s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.701782905 +0000 UTC m=+150.079050817" watchObservedRunningTime="2026-02-02 06:48:44.741356675 +0000 UTC m=+150.118624587" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.742986 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" podStartSLOduration=128.742979915 podStartE2EDuration="2m8.742979915s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.740758911 +0000 UTC m=+150.118026813" watchObservedRunningTime="2026-02-02 06:48:44.742979915 +0000 UTC m=+150.120247827" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.778824 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-wjrtc" podStartSLOduration=128.778803464 podStartE2EDuration="2m8.778803464s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.77739186 +0000 UTC m=+150.154659772" watchObservedRunningTime="2026-02-02 06:48:44.778803464 +0000 UTC m=+150.156071376" Feb 02 06:48:44 crc kubenswrapper[4842]: W0202 06:48:44.817255 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f753a1_ecf0_4b2c_9121_989677c6b2a6.slice/crio-86551bfa40b78ac651aa4bb3b08214372121725e7903350eb4635288d82753ac WatchSource:0}: Error finding container 86551bfa40b78ac651aa4bb3b08214372121725e7903350eb4635288d82753ac: Status 404 returned error can't find the container with id 86551bfa40b78ac651aa4bb3b08214372121725e7903350eb4635288d82753ac Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.839281 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.839642 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-02 06:48:45.33962653 +0000 UTC m=+150.716894442 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.840609 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" podStartSLOduration=128.840589443 podStartE2EDuration="2m8.840589443s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.839932597 +0000 UTC m=+150.217200509" watchObservedRunningTime="2026-02-02 06:48:44.840589443 +0000 UTC m=+150.217857355" Feb 02 06:48:44 crc kubenswrapper[4842]: I0202 06:48:44.943953 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:44 crc kubenswrapper[4842]: E0202 06:48:44.944814 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.444789062 +0000 UTC m=+150.822056974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.045846 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.046197 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.546182192 +0000 UTC m=+150.923450104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.066757 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" event={"ID":"c4f753a1-ecf0-4b2c-9121-989677c6b2a6","Type":"ContainerStarted","Data":"86551bfa40b78ac651aa4bb3b08214372121725e7903350eb4635288d82753ac"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.101948 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-qdspj" event={"ID":"45dcaecb-f74e-4eaf-886a-28b6632f8d44","Type":"ContainerStarted","Data":"9893cfb13791ff92b87735366f0d73281bc502ec8f5c46d7a77e471885879a8b"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.105056 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" podStartSLOduration=129.10504096 podStartE2EDuration="2m9.10504096s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:44.943383278 +0000 UTC m=+150.320651190" watchObservedRunningTime="2026-02-02 06:48:45.10504096 +0000 UTC m=+150.482308872" Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.106169 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hv9fc"] Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.117643 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b"] Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.147007 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.147662 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.647641404 +0000 UTC m=+151.024909316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.147848 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" event={"ID":"99922ba3-dd03-4c94-9663-9c530f7b3ad0","Type":"ContainerStarted","Data":"e903d0c7179a7a8213973f57b1d8571980c1db1773ffcec965e4436bc5deecca"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.167769 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" event={"ID":"e95addab-99c5-499c-92bc-f13fd4870710","Type":"ContainerStarted","Data":"bf8c3f93461b4f45026c3dbcf69102b55e4905f119339a26fbba42d9239f2b9a"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.174422 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" event={"ID":"bc8e3a2f-b630-40bf-865e-c7a035385730","Type":"ContainerStarted","Data":"633a96cf373218e4902f722440601e3e44ff539ab1dfcd396628e3317216b44a"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.196107 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-pbtq6" event={"ID":"cc176201-02a2-46c0-903c-13943d989195","Type":"ContainerStarted","Data":"11860d3d3dd36f702b7fbbac25a115db9fc5e69c5bae23b02fb07557a2fd8f8a"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.197397 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.204558 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" event={"ID":"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8","Type":"ContainerStarted","Data":"4f2df937c73158110bca83af94c7ca1466f862d31bb5d5ac9f1c617ca204a0ca"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.215618 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-j7bfz" event={"ID":"23594203-b17a-4d98-95da-a7c0e3a2ef4e","Type":"ContainerStarted","Data":"6a57a2d4264cf37752ab3da69a10983b86385f84e5fe9c2db99830075f52413a"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.221145 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kmw8f" event={"ID":"59990591-2248-489b-bac2-e7cab22482f8","Type":"ContainerStarted","Data":"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434"} Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.230561 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm"] Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.248614 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= 
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.248680 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.249550 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.250421 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.750398987 +0000 UTC m=+151.127666899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.258093 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" event={"ID":"57b85eac-df63-4c81-abe6-3dba293df9c2","Type":"ContainerStarted","Data":"d2fd60f59fddc30897ba37779de20ce7fb25833d572dcc1de237b74148cf5af6"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.280575 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" event={"ID":"42ff05d2-dda3-411f-bcee-816f87ce21b8","Type":"ContainerStarted","Data":"c38cc69394295f586172b4acf019bfc50c159ccc330982f39cf94e1fe9b27683"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.285183 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" event={"ID":"e4367135-ecb4-447d-a89e-5dcbeffe345e","Type":"ContainerStarted","Data":"93ec7525bee512d972c992202015e6a305802c186439d4e8975bf16153a14c8f"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.352669 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.354691 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.854668878 +0000 UTC m=+151.231936790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.356624 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-pbtq6" podStartSLOduration=129.356608325 podStartE2EDuration="2m9.356608325s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:45.338010753 +0000 UTC m=+150.715278675" watchObservedRunningTime="2026-02-02 06:48:45.356608325 +0000 UTC m=+150.733876237"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.380374 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z2sjd"]
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.384793 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lh2qm" event={"ID":"fd96d668-a9b2-474f-8617-17eca5f01191","Type":"ContainerStarted","Data":"47646ec9237caa84032e8451e41a413bfcc66da7a9af859fc66fe722176c041e"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.400605 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" event={"ID":"5b43b464-5623-46bb-8097-65b505d08960","Type":"ContainerStarted","Data":"ba19112a26c109422079efb77e0284d9fe51d522c7191998e89b078a7d34963e"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.416756 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j"]
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.421999 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hbn7m" event={"ID":"d69d0f34-1e03-438d-9d97-de945aff185f","Type":"ContainerStarted","Data":"effb92735b3e6afb10c9dc8774289f46b7283dacd07a69849dd78dcdb2d304b3"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.454180 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.455195 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:45.955173606 +0000 UTC m=+151.332441518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.475998 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" event={"ID":"1b0e61a0-72dd-4edd-8217-c7b157e2c38c","Type":"ContainerStarted","Data":"a05baceb8c51022ca5f91f8419a01be0cc4ac107e436d2ab4eb90aeddd510ff6"}
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.476046 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.479520 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6fhk9"]
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.479946 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-j7bfz" podStartSLOduration=129.479919387 podStartE2EDuration="2m9.479919387s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:45.475042878 +0000 UTC m=+150.852310790" watchObservedRunningTime="2026-02-02 06:48:45.479919387 +0000 UTC m=+150.857187299"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.489783 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.509694 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.517871 4842 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n6n4t container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body=
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.517932 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" podUID="1b0e61a0-72dd-4edd-8217-c7b157e2c38c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.530364 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-j9jgh"
Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.548195 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
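Every mount and unmount attempt above fails with the same root cause: the kubelet asks its registry of CSI drivers for kubevirt.io.hostpath-provisioner before that driver's node plugin has registered itself, so no CSI client can be built. A minimal sketch in Go of what that lookup amounts to (illustrative types and names, not the actual kubelet code):

    package main

    import (
        "fmt"
        "sync"
    )

    // driverRegistry stands in for the kubelet's list of registered CSI
    // drivers; a driver only appears here after its node plugin registers
    // over the plugin-registration socket.
    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> unix socket endpoint
    }

    func (r *driverRegistry) newClient(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            // The same message embedded in the errors above.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func main() {
        reg := &driverRegistry{drivers: map[string]string{}} // hostpath plugin not registered yet
        if _, err := reg.newClient("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println(err)
        }
    }

The csi-hostpathplugin-6fhk9 pod that provides this driver is itself only being started in these log lines, which is why the image-registry PVC cannot mount yet.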
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.583415 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.083389348 +0000 UTC m=+151.460657260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.605386 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kb6j9"] Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.616831 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9xwbf" podStartSLOduration=130.616802658 podStartE2EDuration="2m10.616802658s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:45.60493069 +0000 UTC m=+150.982198602" watchObservedRunningTime="2026-02-02 06:48:45.616802658 +0000 UTC m=+150.994070570" Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.654054 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps"] Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.678741 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4rp8p" Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.686983 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82"] Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.694336 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.694876 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.194860833 +0000 UTC m=+151.572128745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:45 crc kubenswrapper[4842]: W0202 06:48:45.707722 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8cad1e4_b070_477e_a20a_5cf8cb397e85.slice/crio-4acd1a33eb07c5ae32cf1b6d6c9698a092192e69a2e1034f70efacfa7093a85e WatchSource:0}: Error finding container 4acd1a33eb07c5ae32cf1b6d6c9698a092192e69a2e1034f70efacfa7093a85e: Status 404 returned error can't find the container with id 4acd1a33eb07c5ae32cf1b6d6c9698a092192e69a2e1034f70efacfa7093a85e Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.736685 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" podStartSLOduration=129.736663737 podStartE2EDuration="2m9.736663737s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:45.690540108 +0000 UTC m=+151.067808020" watchObservedRunningTime="2026-02-02 06:48:45.736663737 +0000 UTC m=+151.113931649" Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.739355 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.745575 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:45 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:45 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:45 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.745636 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.801517 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.801810 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.301791307 +0000 UTC m=+151.679059219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.822032 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" podStartSLOduration=129.822010008 podStartE2EDuration="2m9.822010008s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:45.820898711 +0000 UTC m=+151.198166623" watchObservedRunningTime="2026-02-02 06:48:45.822010008 +0000 UTC m=+151.199277920" Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.883370 4842 csr.go:261] certificate signing request csr-sclbq is approved, waiting to be issued Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.883690 4842 csr.go:257] certificate signing request csr-sclbq is issued Feb 02 06:48:45 crc kubenswrapper[4842]: I0202 06:48:45.904896 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:45 crc kubenswrapper[4842]: E0202 06:48:45.906022 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.406002906 +0000 UTC m=+151.783270818 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.029630 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.031128 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.531098222 +0000 UTC m=+151.908366124 (durationBeforeRetry 500ms). 
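The "No retries permitted until ... (durationBeforeRetry 500ms)" lines come from the pending-operations layer: after a failure it records a deadline and refuses to start the same volume operation again until the deadline passes, which is why the mount/unmount error pair recurs roughly every half second. A simplified sketch of that gate (illustrative; the log shows a constant 500ms window, and any exponential growth of the backoff is omitted here):

    package main

    import (
        "fmt"
        "time"
    )

    // retryGate records, per operation key, the earliest time a failed
    // operation may run again - the "No retries permitted until" deadline.
    type retryGate struct {
        deadlines map[string]time.Time
    }

    func (g *retryGate) fail(key string, now time.Time, backoff time.Duration) {
        g.deadlines[key] = now.Add(backoff)
    }

    func (g *retryGate) mayRun(key string, now time.Time) bool {
        return now.After(g.deadlines[key]) // zero deadline => never failed => allowed
    }

    func main() {
        g := &retryGate{deadlines: map[string]time.Time{}}
        now := time.Now()
        g.fail("pvc-657094db/MountDevice", now, 500*time.Millisecond)
        fmt.Println(g.mayRun("pvc-657094db/MountDevice", now.Add(100*time.Millisecond))) // false: still inside the window
        fmt.Println(g.mayRun("pvc-657094db/MountDevice", now.Add(600*time.Millisecond))) // true: deadline has passed
    }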
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.029630 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.031128 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.531098222 +0000 UTC m=+151.908366124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.131954 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.132397 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:46.632381319 +0000 UTC m=+152.009649231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[... the identical UnmountVolume started / UnmountVolume.TearDown failed and MountVolume started / MountVolume.MountDevice failed pairs repeat verbatim at 06:48:46.233, 06:48:46.335, and 06:48:46.438, each attempt deferred a further 500ms; only the timestamps differ ...]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.449352 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-74vp9"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.450396 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74vp9"
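The alternating "operationExecutor.UnmountVolume started" / "operationExecutor.MountVolume started" lines are the volume manager's reconciler: on each pass it diffs the desired state of the world (volumes pods should have) against the actual state (volumes currently mounted) and starts an operation for every mismatch, so the failed operations above simply reappear on the next pass. A toy version of that loop (illustrative names, nothing kubelet-specific):

    package main

    import "fmt"

    // reconcile starts a mount for every volume that is desired but not
    // actually mounted, and an unmount for every volume that is mounted
    // but no longer desired - the pattern behind the log lines above.
    func reconcile(desired, actual map[string]bool) {
        for vol := range desired {
            if !actual[vol] {
                fmt.Printf("operationExecutor.MountVolume started for volume %q\n", vol)
            }
        }
        for vol := range actual {
            if !desired[vol] {
                fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
            }
        }
    }

    func main() {
        desired := map[string]bool{"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8": true} // wanted by the new image-registry pod
        actual := map[string]bool{"some-old-volume": true}                            // left behind by a deleted pod
        reconcile(desired, actual)                                                    // one mount started, one unmount started
    }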
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.457003 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74vp9"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.457478 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.541123 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.541737 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.541771 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8v2l\" (UniqueName: \"kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.541807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.541918 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.041893766 +0000 UTC m=+152.419161678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.555803 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" event={"ID":"9f265e28-d9d2-43db-b43b-8f7d778b2fa5","Type":"ContainerStarted","Data":"15f42a690f24dad2e8e12cbd87e95b5de1963351a99e5f92a66c822bb93e2a42"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.555879 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" event={"ID":"9f265e28-d9d2-43db-b43b-8f7d778b2fa5","Type":"ContainerStarted","Data":"4237fb3fc2c0d0427905882c8ea87076a58d31ccd42f534017bd8f4a62869000"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.605226 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z2sjd" event={"ID":"3c976fbc-6a91-494d-8d9e-1abe8119acf9","Type":"ContainerStarted","Data":"a9698972c91998c77dcd7c672110e16872ab1ca222eab182dc3cdc9e1a6629e0"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.645427 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.645480 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.645524 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8v2l\" (UniqueName: \"kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.645577 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.647064 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.648872 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.662298 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.162269677 +0000 UTC m=+152.539537589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.663335 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" event={"ID":"c4f753a1-ecf0-4b2c-9121-989677c6b2a6","Type":"ContainerStarted","Data":"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.665278 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.672772 4842 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bzsxn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.672873 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.693902 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z5jt7"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.697090 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" event={"ID":"966b8965-4dbb-4735-9564-eac0652fa990","Type":"ContainerStarted","Data":"49c8dab4096bc22d0214ccc074500be0de11bfa62de290221a3661baa279c956"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.697230 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" event={"ID":"966b8965-4dbb-4735-9564-eac0652fa990","Type":"ContainerStarted","Data":"5c6b84f9f11dcf696a0f508631b9436f8b8bd39ab4a0c268b86ed1e8f1857af6"}
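The readiness failures in this stretch (marketplace-operator here, packageserver and the console downloads pod earlier) all have the same shape: the container has only just started, nothing is listening yet, and the probe's HTTP GET gets "connection refused". Functionally a kubelet HTTP probe reduces to roughly the following sketch (not the prober's real code; kubelet counts 2xx/3xx status codes as success and keeps the start of the body for the log):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // probe GETs the endpoint, treats transport errors and non-2xx/3xx
    // codes as failure, and returns the start of the body as "output".
    func probe(url string) (ok bool, output string) {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return false, err.Error() // e.g. "dial tcp 10.217.0.21:8080: connect: connection refused"
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024))
        return resp.StatusCode >= 200 && resp.StatusCode < 400, string(body)
    }

    func main() {
        ok, out := probe("http://10.217.0.21:8080/healthz") // endpoint taken from the log line above
        fmt.Println(ok, out)
    }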
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.697414 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.701121 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.726979 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5jt7"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.738839 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" event={"ID":"99922ba3-dd03-4c94-9663-9c530f7b3ad0","Type":"ContainerStarted","Data":"ded980eed0bdc6282da6593565d27076ddee0dc4971ef792b14482b8d4fdf695"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.740963 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8v2l\" (UniqueName: \"kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l\") pod \"certified-operators-74vp9\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") " pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.746883 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:48:46 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:48:46 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:48:46 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.747420 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.748117 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.756064 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.256037933 +0000 UTC m=+152.633305845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.804375 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"36431c07d80df4215cfbde2d713a5ce005a80527310444090090b9c5f928ad31"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.810915 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.816617 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"]
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.828133 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" event={"ID":"90441cdf-d9ad-48d8-a400-9c770bc81a60","Type":"ContainerStarted","Data":"508a7b34a0de2d6d36e3a3b6ffdac868bd6d1323451256fd8c1bfac6ac424442"}
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.828312 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.855862 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.855923 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q662f\" (UniqueName: \"kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.855947 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.856006 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
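The router's startup probe fails differently from the "connection refused" cases: the endpoint is up but returns 500, and the body enumerates named sub-checks ("[-]backend-http failed", "[-]has-synced failed", "[+]process-running ok") followed by an overall verdict. That is the conventional aggregated-healthz format. A minimal sketch of a handler producing such a body (illustrative; only the check names are taken from the log):

    package main

    import (
        "fmt"
        "strings"
    )

    type check struct {
        name string
        err  error
    }

    // healthz renders the "[+]/[-]" body seen in the router's startup
    // probe output and returns 500 if any check fails.
    func healthz(checks []check) (code int, body string) {
        var b strings.Builder
        code = 200
        for _, c := range checks {
            if c.err != nil {
                fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
                code = 500
            } else {
                fmt.Fprintf(&b, "[+]%s ok\n", c.name)
            }
        }
        if code != 200 {
            b.WriteString("healthz check failed\n")
        }
        return code, b.String()
    }

    func main() {
        code, body := healthz([]check{
            {"backend-http", fmt.Errorf("no backends")},
            {"has-synced", fmt.Errorf("initial sync incomplete")},
            {"process-running", nil},
        })
        fmt.Println(code)
        fmt.Print(body)
    }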
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.359029482 +0000 UTC m=+152.736297394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.885396 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-02 06:43:45 +0000 UTC, rotation deadline is 2026-11-15 17:24:34.990698022 +0000 UTC Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.886372 4842 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6874h35m48.104333327s for next certificate rotation Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.889003 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"] Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.889060 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" event={"ID":"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b","Type":"ContainerStarted","Data":"b57b600b75ae9681e28f07052aa1148c25502ec64e1dd29ac9424ff3806f45de"} Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.889091 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" event={"ID":"b7ceecfd-f2a9-4c82-85de-e32eb001eb2b","Type":"ContainerStarted","Data":"a7520f027b387f857942f3f76d19851d0d7a5cd9a741cd5666b24c66f48ef91e"} Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.889755 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.904504 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" podStartSLOduration=130.904475115 podStartE2EDuration="2m10.904475115s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:46.885098044 +0000 UTC m=+152.262365956" watchObservedRunningTime="2026-02-02 06:48:46.904475115 +0000 UTC m=+152.281743027" Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.906171 4842 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-z8q7b container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.906269 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" podUID="b7ceecfd-f2a9-4c82-85de-e32eb001eb2b" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 02 
06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.926740 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" event={"ID":"f2ee0e33-a160-4303-af00-0b145647f807","Type":"ContainerStarted","Data":"1668fa4ee8ccd649a60667059e75b5e87cc153d6a088ea4159a7ed346889e106"} Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.953047 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" event={"ID":"9ccdbc28-a0cd-4d92-afc6-9ba18f4ff3e8","Type":"ContainerStarted","Data":"5ca29137122a39b9ac957c77342534c76b8b34092851c7595d6f8f3c7cc5b828"} Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.955875 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5f9169ec9aff7d5034c7afdc8458f4af1bc2732017b6de7bc063ad3ed4561a8c"} Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956016 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d92225d2ec35a17728862e902cfa1ead30114e942875a61db9b6f6d198f4a6c9"} Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956611 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956875 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956908 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q662f\" (UniqueName: \"kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956932 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.956961 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.957136 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.957225 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtcmj\" (UniqueName: \"kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:48:46 crc kubenswrapper[4842]: E0202 06:48:46.957394 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.457367268 +0000 UTC m=+152.834635180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.970676 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.970998 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.979431 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" event={"ID":"6d58ee7c-c176-4ddd-af48-d9406f4eac74","Type":"ContainerStarted","Data":"daa8a7174846994a23811ef2ed7deb05c3720d0567ad2065fea09b3d52e6f730"} Feb 02 06:48:46 crc kubenswrapper[4842]: I0202 06:48:46.985014 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rx6hm" podStartSLOduration=130.984991128 podStartE2EDuration="2m10.984991128s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:46.979945186 +0000 UTC m=+152.357213098" watchObservedRunningTime="2026-02-02 06:48:46.984991128 +0000 UTC m=+152.362259030" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.000385 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" 
event={"ID":"bf3383aa-e821-4389-b2f0-cc697ad4cc7a","Type":"ContainerStarted","Data":"d0bde4d8c2cd6144f08674c306929b9ff613065e33f6d8a0333e2008f2ca9c4d"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.011562 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" event={"ID":"1b0e61a0-72dd-4edd-8217-c7b157e2c38c","Type":"ContainerStarted","Data":"f7aea7e5c9085437bb918b8a8754534e83d2838ab9e4b1d44de64b0ff655b5e7"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.024150 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hv9fc" podStartSLOduration=131.024130478 podStartE2EDuration="2m11.024130478s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.020157202 +0000 UTC m=+152.397425104" watchObservedRunningTime="2026-02-02 06:48:47.024130478 +0000 UTC m=+152.401398380" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.035351 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" event={"ID":"29629b99-9606-4830-9623-8c81cecbd0a9","Type":"ContainerStarted","Data":"e959fdbbf0951e98dbf0fb8a34fd65e9e378852406a95f0339e6592f27d19356"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.035405 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" event={"ID":"29629b99-9606-4830-9623-8c81cecbd0a9","Type":"ContainerStarted","Data":"c918c1f859ec2b36508dd23d07a42d9b1413d0bf48f4e9bd3000d0775f5c8c22"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.035427 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.047936 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l9qkz"] Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.054093 4842 util.go:30] "No sandbox for pod can be found. 
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.054093 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9qkz"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.058939 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.058976 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.059063 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtcmj\" (UniqueName: \"kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.059147 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.059923 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.061005 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.560988282 +0000 UTC m=+152.938256194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.061116 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.079053 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gnmkq" podStartSLOduration=131.07902019 podStartE2EDuration="2m11.07902019s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.056206526 +0000 UTC m=+152.433474448" watchObservedRunningTime="2026-02-02 06:48:47.07902019 +0000 UTC m=+152.456288102"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.080485 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q662f\" (UniqueName: \"kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f\") pod \"community-operators-z5jt7\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") " pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.082502 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l9qkz"]
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.113329 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtcmj\" (UniqueName: \"kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj\") pod \"certified-operators-9mdpt\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.132486 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" event={"ID":"27bce4a1-799c-4d40-900c-455eaba28398","Type":"ContainerStarted","Data":"2cd70c383102200c10b046ce7a0cd1c1f1076c2986f23a7769899d249ec23a02"}
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.160248 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.160477 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrqbw\" (UniqueName: \"kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.160560 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.160586 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz"
Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.162028 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.662004764 +0000 UTC m=+153.039272676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.173308 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" podStartSLOduration=131.173292118 podStartE2EDuration="2m11.173292118s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.12889269 +0000 UTC m=+152.506160602" watchObservedRunningTime="2026-02-02 06:48:47.173292118 +0000 UTC m=+152.550560030"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.174706 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ck7h4" podStartSLOduration=131.174700082 podStartE2EDuration="2m11.174700082s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.161623414 +0000 UTC m=+152.538891316" watchObservedRunningTime="2026-02-02 06:48:47.174700082 +0000 UTC m=+152.551967994"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.190826 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" event={"ID":"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe","Type":"ContainerStarted","Data":"04ffd2243b5849ee630dca21e47581a608d351fcdc3dc93a8251781dde7ea1c2"}
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.207753 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9mdpt"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.263377 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.263426 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.263462 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz"
Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.263524 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrqbw\" (UniqueName: \"kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz"
Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.265038 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.765016813 +0000 UTC m=+153.142284715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.265117 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.265194 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.265255 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" event={"ID":"10f8b640-1372-484f-b42f-97e336fb2992","Type":"ContainerStarted","Data":"900ed9927278d1dc592519743974292fe020484c481cf898486b447ec27bf41e"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.285569 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" podStartSLOduration=131.285535321 podStartE2EDuration="2m11.285535321s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.282260682 +0000 UTC m=+152.659528594" watchObservedRunningTime="2026-02-02 06:48:47.285535321 +0000 UTC m=+152.662803233" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.295486 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" event={"ID":"e95addab-99c5-499c-92bc-f13fd4870710","Type":"ContainerStarted","Data":"606f6007502976165b22ae007e25e33f729187b8fa70583e2fb41ce07404a6cb"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.320917 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" event={"ID":"a8cad1e4-b070-477e-a20a-5cf8cb397e85","Type":"ContainerStarted","Data":"4acd1a33eb07c5ae32cf1b6d6c9698a092192e69a2e1034f70efacfa7093a85e"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.332341 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" podStartSLOduration=131.332322687 podStartE2EDuration="2m11.332322687s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.331185589 +0000 UTC m=+152.708453521" watchObservedRunningTime="2026-02-02 06:48:47.332322687 +0000 UTC m=+152.709590599" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.360381 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mrqbw\" (UniqueName: \"kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw\") pod \"community-operators-l9qkz\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.364426 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.379876 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.380562 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.880504646 +0000 UTC m=+153.257772558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.380934 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.388110 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:47.88809033 +0000 UTC m=+153.265358242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.394577 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zn7j9" podStartSLOduration=131.394551136 podStartE2EDuration="2m11.394551136s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.375760451 +0000 UTC m=+152.753028363" watchObservedRunningTime="2026-02-02 06:48:47.394551136 +0000 UTC m=+152.771819048" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.406542 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m2mqz" event={"ID":"14030278-3de4-4425-8308-813d4f7c0a2d","Type":"ContainerStarted","Data":"3f28817203371b56e81eb787ae3bd71bff7d3c630b4bc529c1ff7107d2cb9d14"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.453019 4842 generic.go:334] "Generic (PLEG): container finished" podID="57b85eac-df63-4c81-abe6-3dba293df9c2" containerID="d2fd60f59fddc30897ba37779de20ce7fb25833d572dcc1de237b74148cf5af6" exitCode=0 Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.453738 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" event={"ID":"57b85eac-df63-4c81-abe6-3dba293df9c2","Type":"ContainerDied","Data":"d2fd60f59fddc30897ba37779de20ce7fb25833d572dcc1de237b74148cf5af6"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.453765 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.453775 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" event={"ID":"57b85eac-df63-4c81-abe6-3dba293df9c2","Type":"ContainerStarted","Data":"3b4dbc3751ec24a7f4a8ae73a64ea7c63704029c634d0e7e87f555b8b9d21c56"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.477612 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kb6j9" event={"ID":"86a554b4-30b1-4521-8677-d1974308a379","Type":"ContainerStarted","Data":"de02899c60928ac9c2bedfa6fdefa8efa363483658a025a263c5ed9b9d6e0344"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.483269 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.484439 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-02 06:48:47.984418307 +0000 UTC m=+153.361686219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.503586 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" event={"ID":"bc8e3a2f-b630-40bf-865e-c7a035385730","Type":"ContainerStarted","Data":"fac8f7f747549abd71c8fb62a9d629c838529476ee3e56587ee47eb6820e973b"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.504395 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.518026 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"12493dcc1ca2c5aeb5273cd5a3222736513b0191aee70f6150b75b9bd0692df1"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.519686 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-m2mqz" podStartSLOduration=8.519675333 podStartE2EDuration="8.519675333s" podCreationTimestamp="2026-02-02 06:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.519068158 +0000 UTC m=+152.896336070" watchObservedRunningTime="2026-02-02 06:48:47.519675333 +0000 UTC m=+152.896943245" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.521811 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" podStartSLOduration=131.521802344 podStartE2EDuration="2m11.521802344s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.47833381 +0000 UTC m=+152.855601722" watchObservedRunningTime="2026-02-02 06:48:47.521802344 +0000 UTC m=+152.899070256" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.565563 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.565796 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.566026 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" 
event={"ID":"42ff05d2-dda3-411f-bcee-816f87ce21b8","Type":"ContainerStarted","Data":"6c3f6c2ad3db40219c62ae4bfa566bc8dd708b5bae52e7331ee9209d340c103f"} Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.577068 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" podStartSLOduration=131.57704609500001 podStartE2EDuration="2m11.577046095s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.57643354 +0000 UTC m=+152.953701462" watchObservedRunningTime="2026-02-02 06:48:47.577046095 +0000 UTC m=+152.954314007" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.586503 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.588073 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.088061722 +0000 UTC m=+153.465329634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.624441 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n6n4t" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.630591 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6" podStartSLOduration=131.630574714 podStartE2EDuration="2m11.630574714s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.6287733 +0000 UTC m=+153.006041212" watchObservedRunningTime="2026-02-02 06:48:47.630574714 +0000 UTC m=+153.007842616" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.671687 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" podStartSLOduration=131.671668861 podStartE2EDuration="2m11.671668861s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.670236816 +0000 UTC m=+153.047504728" watchObservedRunningTime="2026-02-02 06:48:47.671668861 +0000 UTC m=+153.048936773" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.687436 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.689124 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.189108674 +0000 UTC m=+153.566376586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.748539 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:47 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:47 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:47 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.749118 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.763117 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n9v5x" podStartSLOduration=131.763087339 podStartE2EDuration="2m11.763087339s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.707710835 +0000 UTC m=+153.084978747" watchObservedRunningTime="2026-02-02 06:48:47.763087339 +0000 UTC m=+153.140355251" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.793698 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.794182 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.294162873 +0000 UTC m=+153.671430785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.808692 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nz65j" podStartSLOduration=131.808671835 podStartE2EDuration="2m11.808671835s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.804791061 +0000 UTC m=+153.182058993" watchObservedRunningTime="2026-02-02 06:48:47.808671835 +0000 UTC m=+153.185939747" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.884697 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kb6j9" podStartSLOduration=8.88467854 podStartE2EDuration="8.88467854s" podCreationTimestamp="2026-02-02 06:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.848641455 +0000 UTC m=+153.225909377" watchObservedRunningTime="2026-02-02 06:48:47.88467854 +0000 UTC m=+153.261946452" Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.894660 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:47 crc kubenswrapper[4842]: E0202 06:48:47.895509 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.395488272 +0000 UTC m=+153.772756184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:47 crc kubenswrapper[4842]: I0202 06:48:47.987755 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" podStartSLOduration=131.98773747 podStartE2EDuration="2m11.98773747s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:47.949420371 +0000 UTC m=+153.326688283" watchObservedRunningTime="2026-02-02 06:48:47.98773747 +0000 UTC m=+153.365005382" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.006356 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.006961 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.506943607 +0000 UTC m=+153.884211519 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.106976 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.107433 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.607398063 +0000 UTC m=+153.984665965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.107656 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.107942 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.607928156 +0000 UTC m=+153.985196058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.209876 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.210206 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.710170017 +0000 UTC m=+154.087437929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.292538 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n42rc" podStartSLOduration=132.292518485 podStartE2EDuration="2m12.292518485s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:48.003027942 +0000 UTC m=+153.380295854" watchObservedRunningTime="2026-02-02 06:48:48.292518485 +0000 UTC m=+153.669786397" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.293439 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z5jt7"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.311807 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.312245 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.812229894 +0000 UTC m=+154.189497806 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: W0202 06:48:48.364625 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69e94ec9_2a3b_4f85_a2b7_9e2f07359890.slice/crio-70b3737c860965567c6708a9ff4cb3684a5c902cd3e8826074cbb967adb64bfe WatchSource:0}: Error finding container 70b3737c860965567c6708a9ff4cb3684a5c902cd3e8826074cbb967adb64bfe: Status 404 returned error can't find the container with id 70b3737c860965567c6708a9ff4cb3684a5c902cd3e8826074cbb967adb64bfe Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.418459 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.419602 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.449532 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.450515 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.450833 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:48.950816596 +0000 UTC m=+154.328084508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.452001 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.463165 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-74vp9"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.551797 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k4r4\" (UniqueName: \"kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.551843 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.551870 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.551912 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 
02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.552206 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.052192306 +0000 UTC m=+154.429460228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.626729 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j" event={"ID":"29629b99-9606-4830-9623-8c81cecbd0a9","Type":"ContainerStarted","Data":"945bf35ca92d015136320b8a3950b16173f51580a293bfcd2e5d9e048b095e63"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.636947 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l9qkz"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.653101 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.653620 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k4r4\" (UniqueName: \"kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.653718 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.653860 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.153826113 +0000 UTC m=+154.531094025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.653941 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.654072 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.654342 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.654813 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.655089 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.155081313 +0000 UTC m=+154.532349225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: W0202 06:48:48.670638 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1b2c621_4f86_4e6b_a1ec_02fc1c8113cb.slice/crio-5f20b78ac1d8de395289985ed057496cf0e32696d0cdab93b3ce9b9bfd17fab2 WatchSource:0}: Error finding container 5f20b78ac1d8de395289985ed057496cf0e32696d0cdab93b3ce9b9bfd17fab2: Status 404 returned error can't find the container with id 5f20b78ac1d8de395289985ed057496cf0e32696d0cdab93b3ce9b9bfd17fab2 Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.672302 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kb6j9" event={"ID":"86a554b4-30b1-4521-8677-d1974308a379","Type":"ContainerStarted","Data":"fc41b765d65b37e76e44dc881241523daded8e9098ec7d7110ba797ed3104865"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.681949 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" event={"ID":"6d58ee7c-c176-4ddd-af48-d9406f4eac74","Type":"ContainerStarted","Data":"464808a952f31995536654acb3e40458f6bdf1141a3826c80ebed195946cb223"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.681992 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" event={"ID":"6d58ee7c-c176-4ddd-af48-d9406f4eac74","Type":"ContainerStarted","Data":"fc4d29a0747e30e6da42f8f4d68ab645a40f4871c274145d334bf8b67df10e17"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.717186 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k4r4\" (UniqueName: \"kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4\") pod \"redhat-marketplace-m2j5m\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") " pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.736325 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5wqx2" event={"ID":"bf3383aa-e821-4389-b2f0-cc697ad4cc7a","Type":"ContainerStarted","Data":"d2239d85906d15d95c2bb6ad0bac6d6a4fa5210561871e6415d870eb980801b3"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.742720 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:48 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:48 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:48 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.742792 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.751245 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kgv82" podStartSLOduration=132.751197615 podStartE2EDuration="2m12.751197615s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:48.748645773 +0000 UTC m=+154.125913685" watchObservedRunningTime="2026-02-02 06:48:48.751197615 +0000 UTC m=+154.128465527" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.757021 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.758108 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.258090793 +0000 UTC m=+154.635358705 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.778627 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c64e0ba18824303759c485f51437b75ee8a74e6a8d4b944cc24f13e144ecbe12"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.804020 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2j5m" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.807308 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.808245 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.821080 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"abc6df72344326b897c79dddbc777f5f79006f3f6d9b1ffb0a343fb984c0a1d8"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.821111 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.834575 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z2sjd" event={"ID":"3c976fbc-6a91-494d-8d9e-1abe8119acf9","Type":"ContainerStarted","Data":"b605dc68a1537d20b52455afc18f9c6874d5e4629bdf33a784ebdbad41479788"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.834634 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z2sjd" event={"ID":"3c976fbc-6a91-494d-8d9e-1abe8119acf9","Type":"ContainerStarted","Data":"16d191e4c3d20a62bbed26f5b987517c0d907aaca750f0de1b19d841076ab695"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.835287 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.835382 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.842928 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h6pjl" event={"ID":"27bce4a1-799c-4d40-900c-455eaba28398","Type":"ContainerStarted","Data":"23d9213aaa279c5ba91232afd0bcc353ea39ca32556a7b145af1723f6e7fdb89"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.847028 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"] Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.857846 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.859110 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.359093683 +0000 UTC m=+154.736361595 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.872750 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" event={"ID":"d8b4ca95-d26b-4f03-b095-b5096b6c3fbe","Type":"ContainerStarted","Data":"c506de4db9b35d63a13c91e4a7d3e3341423ad6d40865559c6c4ab2ba9c302bd"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.890898 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" event={"ID":"90441cdf-d9ad-48d8-a400-9c770bc81a60","Type":"ContainerStarted","Data":"b1939976e4e45123ce137bcc7b004566d4ab88bd67c2b1f240a5eee27ca61a78"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.892766 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerStarted","Data":"e77b162572adbddd868d73ee2b2382cf4886626b5d00d4cbd3b5a5a655acde51"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.905791 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerStarted","Data":"70b3737c860965567c6708a9ff4cb3684a5c902cd3e8826074cbb967adb64bfe"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.930960 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.931193 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-z2sjd" podStartSLOduration=9.931177272 podStartE2EDuration="9.931177272s" podCreationTimestamp="2026-02-02 06:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:48.929801699 +0000 UTC m=+154.307069611" watchObservedRunningTime="2026-02-02 06:48:48.931177272 +0000 UTC m=+154.308445184" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.944997 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" event={"ID":"a8cad1e4-b070-477e-a20a-5cf8cb397e85","Type":"ContainerStarted","Data":"f519d41e2af002013e9c8a9601773eaefbe413047fe143e197cb8bf8279ee889"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.945034 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w66ps" event={"ID":"a8cad1e4-b070-477e-a20a-5cf8cb397e85","Type":"ContainerStarted","Data":"c8be51be5b86931f9db2151645a1d2b84329a6fafaf11f6e805c13133c9a85f5"} Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.946716 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 
06:48:48.946748 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.946807 4842 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bzsxn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.946820 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.953068 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-z8q7b" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.960741 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.961110 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwfcq\" (UniqueName: \"kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.961226 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:48 crc kubenswrapper[4842]: I0202 06:48:48.961256 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:48 crc kubenswrapper[4842]: E0202 06:48:48.962173 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.462152654 +0000 UTC m=+154.839420566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.047971 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2mfc5" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.067281 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.067410 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.067448 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.067827 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwfcq\" (UniqueName: \"kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.068951 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.069975 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.56996012 +0000 UTC m=+154.947228032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.077633 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.138573 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwfcq\" (UniqueName: \"kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq\") pod \"redhat-marketplace-m6ms7\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.159336 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.172923 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.173294 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.673264307 +0000 UTC m=+155.050532219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.285355 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.285817 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.785802388 +0000 UTC m=+155.163070300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.386262 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.386618 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.886584743 +0000 UTC m=+155.263852665 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.488371 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.488821 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:49.988799964 +0000 UTC m=+155.366067876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.589720 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.589943 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.089903257 +0000 UTC m=+155.467171169 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.589997 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.590403 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.090395889 +0000 UTC m=+155.467663791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.691350 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.691633 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.191588054 +0000 UTC m=+155.568855996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.737383 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:49 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:49 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:49 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.737478 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.793728 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.794132 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.294114202 +0000 UTC m=+155.671382134 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.895380 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.895682 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.395637836 +0000 UTC m=+155.772905778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.896115 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.896619 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.396604709 +0000 UTC m=+155.773872631 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.949122 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerID="1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5" exitCode=0 Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.949173 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerDied","Data":"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5"} Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.949244 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerStarted","Data":"5f20b78ac1d8de395289985ed057496cf0e32696d0cdab93b3ce9b9bfd17fab2"} Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.950862 4842 generic.go:334] "Generic (PLEG): container finished" podID="671957e9-c40d-416d-8756-a4d7f0abc317" containerID="9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99" exitCode=0 Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.950934 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerDied","Data":"9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99"} Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.952389 4842 generic.go:334] "Generic (PLEG): container finished" podID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerID="fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a" exitCode=0 Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.952465 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerDied","Data":"fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a"} Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.955684 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerStarted","Data":"ad1fd21c691dc675b62fad95a6e7e8ad52ebcb62e20c4eefb0dc3125badfd973"} Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.957342 4842 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bzsxn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.957379 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 
10.217.0.21:8080: connect: connection refused" Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.998197 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.998372 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.498342638 +0000 UTC m=+155.875610560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:49 crc kubenswrapper[4842]: I0202 06:48:49.998822 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:49 crc kubenswrapper[4842]: E0202 06:48:49.999278 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.49926649 +0000 UTC m=+155.876534412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.100334 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.100527 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.600499297 +0000 UTC m=+155.977767209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.101497 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.101831 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.601822959 +0000 UTC m=+155.979090871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.202674 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.202968 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.702904682 +0000 UTC m=+156.080172644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.252924 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.267282 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.268625 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.272616 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.281234 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.282364 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.292564 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.297907 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.306270 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.306689 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.80667125 +0000 UTC m=+156.183939162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.407951 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.408138 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.908110451 +0000 UTC m=+156.285378363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.410790 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.410854 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.410930 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgw2\" (UniqueName: \"kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.410998 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.411037 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.411073 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gfrg\" (UniqueName: \"kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.411091 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.411464 4842 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:50.911451442 +0000 UTC m=+156.288719344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.511758 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.511979 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zgw2\" (UniqueName: \"kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512031 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512054 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512067 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gfrg\" (UniqueName: \"kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512107 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512138 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.512534 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.512601 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.012587326 +0000 UTC m=+156.389855228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.514027 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.514535 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.514572 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.547609 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"] Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.549708 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgw2\" (UniqueName: \"kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2\") pod \"redhat-operators-wjfbs\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") " pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.560641 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gfrg\" (UniqueName: \"kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg\") pod \"redhat-operators-5l5m7\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") " pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.613261 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.614181 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.114163131 +0000 UTC m=+156.491431043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.702024 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.709893 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjfbs" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.714498 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.714864 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.214849864 +0000 UTC m=+156.592117776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.741059 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:50 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:50 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:50 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.741114 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.816342 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.816682 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.316666415 +0000 UTC m=+156.693934327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.918275 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:50 crc kubenswrapper[4842]: E0202 06:48:50.918799 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.418775583 +0000 UTC m=+156.796043485 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.920237 4842 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.974540 4842 generic.go:334] "Generic (PLEG): container finished" podID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerID="d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d" exitCode=0 Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.974609 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerDied","Data":"d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d"} Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.974633 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerStarted","Data":"d839d2fe1ddee6dc1ee5e5c2514aaebc941a9e75e08e10d40cd5d9caf2627fd2"} Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.980235 4842 generic.go:334] "Generic (PLEG): container finished" podID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerID="1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0" exitCode=0 Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.980282 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerDied","Data":"1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0"} Feb 02 06:48:50 crc kubenswrapper[4842]: I0202 06:48:50.983824 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" event={"ID":"90441cdf-d9ad-48d8-a400-9c770bc81a60","Type":"ContainerStarted","Data":"fe859926fa724edf66bc512aa764eb09f7f815b4dfffab337894c3858c798aba"} Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.007839 4842 generic.go:334] "Generic (PLEG): container finished" podID="de569fea-56ca-4762-9a22-a12561c296b6" containerID="cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c" exitCode=0 Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.007917 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerDied","Data":"cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c"} Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.007937 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerStarted","Data":"281d01870ece6a3181561fda9dfe308cdde10657dccb47ecb2c8628297416b48"} Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.012922 4842 generic.go:334] "Generic (PLEG): container finished" podID="5b43b464-5623-46bb-8097-65b505d08960" 
containerID="ba19112a26c109422079efb77e0284d9fe51d522c7191998e89b078a7d34963e" exitCode=0 Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.013435 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" event={"ID":"5b43b464-5623-46bb-8097-65b505d08960","Type":"ContainerDied","Data":"ba19112a26c109422079efb77e0284d9fe51d522c7191998e89b078a7d34963e"} Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.019462 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.020902 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.520890621 +0000 UTC m=+156.898158533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.053694 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"] Feb 02 06:48:51 crc kubenswrapper[4842]: W0202 06:48:51.117068 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7be4c568_0aa4_4495_87b0_ec266872eb12.slice/crio-4d9e0a84da8f191972cd048e101e3cd6029560ea1537fa6b0b79bb80a6aa52cf WatchSource:0}: Error finding container 4d9e0a84da8f191972cd048e101e3cd6029560ea1537fa6b0b79bb80a6aa52cf: Status 404 returned error can't find the container with id 4d9e0a84da8f191972cd048e101e3cd6029560ea1537fa6b0b79bb80a6aa52cf Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.121363 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.135953 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.635932912 +0000 UTC m=+157.013200824 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.137409 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"] Feb 02 06:48:51 crc kubenswrapper[4842]: W0202 06:48:51.164004 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99088cf9_5dcc_4837_943b_4deca45c1401.slice/crio-535c1c949c7f7fddcdec8bd932015e6668761ecd24e167f9b71ea785616441c9 WatchSource:0}: Error finding container 535c1c949c7f7fddcdec8bd932015e6668761ecd24e167f9b71ea785616441c9: Status 404 returned error can't find the container with id 535c1c949c7f7fddcdec8bd932015e6668761ecd24e167f9b71ea785616441c9 Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.237055 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.237519 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.737504057 +0000 UTC m=+157.114771969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.338673 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.338860 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.838829376 +0000 UTC m=+157.216097288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.339170 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.339506 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.839494552 +0000 UTC m=+157.216762464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.440181 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.440323 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.940301118 +0000 UTC m=+157.317569030 (durationBeforeRetry 500ms). 
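[editor's note] Every failure above has the same root cause: the kubelet's volume manager does not yet see kubevirt.io.hostpath-provisioner among its registered CSI plugins, so each mount/unmount is parked for the logged durationBeforeRetry of 500ms and retried; registration only lands at 06:48:51.681 below, after which the operations succeed. A minimal, self-contained Go sketch of that retry-until-registered pattern follows; the names driverRegistry and mountDevice are hypothetical, and this is an illustration of the cadence, not kubelet source.

package main

import (
	"fmt"
	"sync"
	"time"
)

// driverRegistry stands in for the kubelet's table of registered CSI plugins.
type driverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]bool
}

func (r *driverRegistry) Register(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = true
}

func (r *driverRegistry) Has(name string) bool {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.drivers[name]
}

// mountDevice fails with the same wording as the log while the driver is missing.
func mountDevice(reg *driverRegistry, driver, volume string) error {
	if !reg.Has(driver) {
		return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
	}
	fmt.Printf("MountVolume.MountDevice succeeded for volume %q\n", volume)
	return nil
}

func main() {
	reg := &driverRegistry{drivers: map[string]bool{}}

	// The plugin's registration socket appears a moment later, as
	// csi_plugin.go logs at 06:48:51.681 in the trace below.
	go func() {
		time.Sleep(1 * time.Second)
		reg.Register("kubevirt.io.hostpath-provisioner")
	}()

	const durationBeforeRetry = 500 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		err := mountDevice(reg, "kubevirt.io.hostpath-provisioner", "pvc-657094db")
		if err == nil {
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, durationBeforeRetry)
		time.Sleep(durationBeforeRetry)
	}
}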
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.440453 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.440773 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:51.940766759 +0000 UTC m=+157.318034671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.541399 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.541577 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:52.041550645 +0000 UTC m=+157.418818557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.541791 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.542106 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:52.042098788 +0000 UTC m=+157.419366700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.643073 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.643273 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-02 06:48:52.143248481 +0000 UTC m=+157.520516393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.643398 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:51 crc kubenswrapper[4842]: E0202 06:48:51.643902 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-02 06:48:52.143892357 +0000 UTC m=+157.521160269 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fz9q2" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.656766 4842 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-02T06:48:50.920248608Z","Handler":null,"Name":""}
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.676289 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.678905 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.681492 4842 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.681551 4842 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.683544 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.737872 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 02 06:48:51 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]process-running ok
Feb 02 06:48:51 crc kubenswrapper[4842]: healthz check failed
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.737943 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.744736 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.749766 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.808737 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.808784 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.816865 4842 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5dc9g container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]log ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]etcd ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/max-in-flight-filter ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-startinformers ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 02 06:48:51 crc kubenswrapper[4842]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 02 06:48:51 crc kubenswrapper[4842]: livez check failed
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.816911 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" podUID="d8b4ca95-d26b-4f03-b095-b5096b6c3fbe" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.851132 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.892278 4842 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.892323 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.933289 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fz9q2\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.989364 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.989996 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.991675 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.991955 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 02 06:48:51 crc kubenswrapper[4842]: I0202 06:48:51.995442 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.025164 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.042440 4842 generic.go:334] "Generic (PLEG): container finished" podID="99088cf9-5dcc-4837-943b-4deca45c1401" containerID="4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164" exitCode=0
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.042497 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerDied","Data":"4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164"}
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.042520 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerStarted","Data":"535c1c949c7f7fddcdec8bd932015e6668761ecd24e167f9b71ea785616441c9"}
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.045182 4842 generic.go:334] "Generic (PLEG): container finished" podID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerID="e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f" exitCode=0
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.045428 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerDied","Data":"e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f"}
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.045578 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerStarted","Data":"4d9e0a84da8f191972cd048e101e3cd6029560ea1537fa6b0b79bb80a6aa52cf"}
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.049543 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" event={"ID":"90441cdf-d9ad-48d8-a400-9c770bc81a60","Type":"ContainerStarted","Data":"f15d9506e5b40443687fbab2e9220f3d1c689180ce0206bdcef9286524b11e16"}
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.049587 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" event={"ID":"90441cdf-d9ad-48d8-a400-9c770bc81a60","Type":"ContainerStarted","Data":"5f3fba8d88c022599a1cddd263d757e5bf2ae550c1bea15862cacb8fa3958b74"}
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.054805 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.054935 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.057093 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jplm6"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.132801 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6fhk9" podStartSLOduration=13.13278362 podStartE2EDuration="13.13278362s" podCreationTimestamp="2026-02-02 06:48:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:52.13032206 +0000 UTC m=+157.507589972" watchObservedRunningTime="2026-02-02 06:48:52.13278362 +0000 UTC m=+157.510051532"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.158116 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.158167 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.158238 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.181839 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.325738 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.419405 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.419770 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.419470 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-kmw8f"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.419422 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.421267 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.421485 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-kmw8f"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.424598 4842 patch_prober.go:28] interesting pod/console-f9d7485db-kmw8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body=
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.424642 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kmw8f" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused"
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.481469 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"]
Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.491908 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:52 crc kubenswrapper[4842]: W0202 06:48:52.505796 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb76f3bc4_4824_422b_a14a_e7cd193ed30d.slice/crio-abf58a7559b9cdd76c76ebedd2333919bb6bc99060b8c1cfc73575fcdd484652 WatchSource:0}: Error finding container abf58a7559b9cdd76c76ebedd2333919bb6bc99060b8c1cfc73575fcdd484652: Status 404 returned error can't find the container with id abf58a7559b9cdd76c76ebedd2333919bb6bc99060b8c1cfc73575fcdd484652 Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.564813 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8ll8\" (UniqueName: \"kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8\") pod \"5b43b464-5623-46bb-8097-65b505d08960\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.564890 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") pod \"5b43b464-5623-46bb-8097-65b505d08960\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.564994 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume\") pod \"5b43b464-5623-46bb-8097-65b505d08960\" (UID: \"5b43b464-5623-46bb-8097-65b505d08960\") " Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.566787 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b43b464-5623-46bb-8097-65b505d08960" (UID: "5b43b464-5623-46bb-8097-65b505d08960"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.574428 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b43b464-5623-46bb-8097-65b505d08960" (UID: "5b43b464-5623-46bb-8097-65b505d08960"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.574629 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8" (OuterVolumeSpecName: "kube-api-access-p8ll8") pod "5b43b464-5623-46bb-8097-65b505d08960" (UID: "5b43b464-5623-46bb-8097-65b505d08960"). InnerVolumeSpecName "kube-api-access-p8ll8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.665976 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b43b464-5623-46bb-8097-65b505d08960-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.666005 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8ll8\" (UniqueName: \"kubernetes.io/projected/5b43b464-5623-46bb-8097-65b505d08960-kube-api-access-p8ll8\") on node \"crc\" DevicePath \"\"" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.666016 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b43b464-5623-46bb-8097-65b505d08960-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.732917 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.740901 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:52 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:52 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:52 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.740938 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:52 crc kubenswrapper[4842]: I0202 06:48:52.836457 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.043864 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.069340 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.113598 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.113731 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw" event={"ID":"5b43b464-5623-46bb-8097-65b505d08960","Type":"ContainerDied","Data":"5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34"} Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.113786 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d47aec119b9bfe1604e8d488d64ba28c81374dd8415db475287c6760b603f34" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.129491 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" event={"ID":"b76f3bc4-4824-422b-a14a-e7cd193ed30d","Type":"ContainerStarted","Data":"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4"} Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.129557 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" event={"ID":"b76f3bc4-4824-422b-a14a-e7cd193ed30d","Type":"ContainerStarted","Data":"abf58a7559b9cdd76c76ebedd2333919bb6bc99060b8c1cfc73575fcdd484652"} Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.131233 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.174997 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" podStartSLOduration=137.17498088 podStartE2EDuration="2m17.17498088s" podCreationTimestamp="2026-02-02 06:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:53.17252038 +0000 UTC m=+158.549788302" watchObservedRunningTime="2026-02-02 06:48:53.17498088 +0000 UTC m=+158.552248782" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.481083 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.735875 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:53 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:53 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:53 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:53 crc kubenswrapper[4842]: I0202 06:48:53.735927 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.147789 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2298664c-b466-4829-bccf-8f5a49efafdb","Type":"ContainerStarted","Data":"9672a6ddab80bc300da97b79bd14e40058a02f19d3a230db5eabe623ded153a0"} Feb 02 06:48:54 crc 
kubenswrapper[4842]: I0202 06:48:54.231105 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 06:48:54 crc kubenswrapper[4842]: E0202 06:48:54.231368 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b43b464-5623-46bb-8097-65b505d08960" containerName="collect-profiles" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.231399 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b43b464-5623-46bb-8097-65b505d08960" containerName="collect-profiles" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.231556 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b43b464-5623-46bb-8097-65b505d08960" containerName="collect-profiles" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.231952 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.234027 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.234442 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.239012 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.315476 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.315526 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.417292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.417344 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.417441 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.466811 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.562052 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.736553 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:54 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:54 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:54 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:54 crc kubenswrapper[4842]: I0202 06:48:54.736619 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.044294 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.157570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"012f550e-3c84-45fc-8d26-c49c763e808f","Type":"ContainerStarted","Data":"63df2dbe83d771de3ee2390f597aa7eb8663570b98da094b957d600da86a730a"} Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.163592 4842 generic.go:334] "Generic (PLEG): container finished" podID="2298664c-b466-4829-bccf-8f5a49efafdb" containerID="7a12e90bf3e43c95b6e601257b8c111b1524ce5b9f1e59ad387715a73494345a" exitCode=0 Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.163633 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2298664c-b466-4829-bccf-8f5a49efafdb","Type":"ContainerDied","Data":"7a12e90bf3e43c95b6e601257b8c111b1524ce5b9f1e59ad387715a73494345a"} Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.736514 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:55 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:55 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:55 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:55 crc kubenswrapper[4842]: I0202 06:48:55.736594 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.194307 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"012f550e-3c84-45fc-8d26-c49c763e808f","Type":"ContainerStarted","Data":"57ac07575bb5778011d98303226e4e4e9a167afdaea5a5d819196b7d3fdab21c"} Feb 02 
06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.211541 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.211482531 podStartE2EDuration="2.211482531s" podCreationTimestamp="2026-02-02 06:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:48:56.2093807 +0000 UTC m=+161.586648602" watchObservedRunningTime="2026-02-02 06:48:56.211482531 +0000 UTC m=+161.588750443" Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.468405 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.554396 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir\") pod \"2298664c-b466-4829-bccf-8f5a49efafdb\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.554492 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2298664c-b466-4829-bccf-8f5a49efafdb" (UID: "2298664c-b466-4829-bccf-8f5a49efafdb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.554588 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access\") pod \"2298664c-b466-4829-bccf-8f5a49efafdb\" (UID: \"2298664c-b466-4829-bccf-8f5a49efafdb\") " Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.554907 4842 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2298664c-b466-4829-bccf-8f5a49efafdb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.562579 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2298664c-b466-4829-bccf-8f5a49efafdb" (UID: "2298664c-b466-4829-bccf-8f5a49efafdb"). InnerVolumeSpecName "kube-api-access". 
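[editor's note] The run-once revision-pruner pods above show the PLEG cycle end to end: "SyncLoop (PLEG): event for pod" publishes ContainerStarted, then generic.go:334 notices the container "finished" with exitCode=0 and a ContainerDied event follows. The PLEG's core idea is a periodic relist that diffs container states against the previous snapshot; here is a hypothetical miniature of that diff in Go, not the kubelet implementation.

package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

// relist compares the previous container snapshot with the current one and
// emits one event per observed transition, the way generic.go:334 reports a
// finished container before kubelet.go:2453 publishes ContainerDied.
func relist(prev, cur map[string]state) []string {
	var events []string
	for id, s := range cur {
		old, seen := prev[id]
		switch {
		case !seen && s == running:
			events = append(events, "ContainerStarted "+id)
		case seen && old == running && s == exited:
			events = append(events, "ContainerDied "+id)
		}
	}
	return events
}

func main() {
	prev := map[string]state{"57ac0757": running}
	cur := map[string]state{"57ac0757": exited} // revision-pruner-8-crc exited with code 0
	for _, e := range relist(prev, cur) {
		fmt.Println("SyncLoop (PLEG): event", e)
	}
}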
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.659893 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2298664c-b466-4829-bccf-8f5a49efafdb-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.735738 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:56 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:56 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:56 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.735789 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.815152 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:56 crc kubenswrapper[4842]: I0202 06:48:56.823473 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5dc9g" Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.215542 4842 generic.go:334] "Generic (PLEG): container finished" podID="012f550e-3c84-45fc-8d26-c49c763e808f" containerID="57ac07575bb5778011d98303226e4e4e9a167afdaea5a5d819196b7d3fdab21c" exitCode=0 Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.215638 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"012f550e-3c84-45fc-8d26-c49c763e808f","Type":"ContainerDied","Data":"57ac07575bb5778011d98303226e4e4e9a167afdaea5a5d819196b7d3fdab21c"} Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.230247 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.238520 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2298664c-b466-4829-bccf-8f5a49efafdb","Type":"ContainerDied","Data":"9672a6ddab80bc300da97b79bd14e40058a02f19d3a230db5eabe623ded153a0"} Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.238594 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9672a6ddab80bc300da97b79bd14e40058a02f19d3a230db5eabe623ded153a0" Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.737038 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:57 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:57 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:57 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.737104 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:57 crc kubenswrapper[4842]: I0202 06:48:57.952905 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-z2sjd" Feb 02 06:48:58 crc kubenswrapper[4842]: I0202 06:48:58.735700 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:58 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:58 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:58 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:58 crc kubenswrapper[4842]: I0202 06:48:58.736015 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:48:59 crc kubenswrapper[4842]: I0202 06:48:59.305292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:59 crc kubenswrapper[4842]: I0202 06:48:59.311065 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f6c3b51-669c-4c7b-a23a-ed68d139849e-metrics-certs\") pod \"network-metrics-daemon-9chjr\" (UID: \"4f6c3b51-669c-4c7b-a23a-ed68d139849e\") " pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:59 crc kubenswrapper[4842]: I0202 06:48:59.372762 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9chjr" Feb 02 06:48:59 crc kubenswrapper[4842]: I0202 06:48:59.735426 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:48:59 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:48:59 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:48:59 crc kubenswrapper[4842]: healthz check failed Feb 02 06:48:59 crc kubenswrapper[4842]: I0202 06:48:59.735488 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:49:00 crc kubenswrapper[4842]: I0202 06:49:00.736455 4842 patch_prober.go:28] interesting pod/router-default-5444994796-j7bfz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 02 06:49:00 crc kubenswrapper[4842]: [-]has-synced failed: reason withheld Feb 02 06:49:00 crc kubenswrapper[4842]: [+]process-running ok Feb 02 06:49:00 crc kubenswrapper[4842]: healthz check failed Feb 02 06:49:00 crc kubenswrapper[4842]: I0202 06:49:00.736983 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-j7bfz" podUID="23594203-b17a-4d98-95da-a7c0e3a2ef4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 02 06:49:01 crc kubenswrapper[4842]: I0202 06:49:01.736262 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:49:01 crc kubenswrapper[4842]: I0202 06:49:01.739307 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-j7bfz" Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.316433 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.361203 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir\") pod \"012f550e-3c84-45fc-8d26-c49c763e808f\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.361259 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access\") pod \"012f550e-3c84-45fc-8d26-c49c763e808f\" (UID: \"012f550e-3c84-45fc-8d26-c49c763e808f\") " Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.361406 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "012f550e-3c84-45fc-8d26-c49c763e808f" (UID: "012f550e-3c84-45fc-8d26-c49c763e808f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.362417 4842 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/012f550e-3c84-45fc-8d26-c49c763e808f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.368435 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "012f550e-3c84-45fc-8d26-c49c763e808f" (UID: "012f550e-3c84-45fc-8d26-c49c763e808f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416023 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416051 4842 patch_prober.go:28] interesting pod/console-f9d7485db-kmw8f container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416086 4842 patch_prober.go:28] interesting pod/downloads-7954f5f757-pbtq6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416082 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416099 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-kmw8f" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.24:8443/health\": dial tcp 10.217.0.24:8443: connect: connection refused" Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.416138 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-pbtq6" podUID="cc176201-02a2-46c0-903c-13943d989195" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 02 06:49:02 crc kubenswrapper[4842]: I0202 06:49:02.463999 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/012f550e-3c84-45fc-8d26-c49c763e808f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:03 crc kubenswrapper[4842]: I0202 06:49:03.270784 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"012f550e-3c84-45fc-8d26-c49c763e808f","Type":"ContainerDied","Data":"63df2dbe83d771de3ee2390f597aa7eb8663570b98da094b957d600da86a730a"} Feb 02 06:49:03 crc kubenswrapper[4842]: I0202 06:49:03.270812 4842 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 02 06:49:03 crc kubenswrapper[4842]: I0202 06:49:03.270817 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63df2dbe83d771de3ee2390f597aa7eb8663570b98da094b957d600da86a730a" Feb 02 06:49:04 crc kubenswrapper[4842]: I0202 06:49:04.226557 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"] Feb 02 06:49:04 crc kubenswrapper[4842]: I0202 06:49:04.226745 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager" containerID="cri-o://64198cd4ed9c3f648a83a0d5cc2017b0e62648734deb3f42088a21d4a035b132" gracePeriod=30 Feb 02 06:49:04 crc kubenswrapper[4842]: I0202 06:49:04.250353 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"] Feb 02 06:49:04 crc kubenswrapper[4842]: I0202 06:49:04.250749 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager" containerID="cri-o://ba883d0dbff2f8d72bcfa41bc18c26959b10543f2aee551d9c4325bf6653ef2e" gracePeriod=30 Feb 02 06:49:05 crc kubenswrapper[4842]: I0202 06:49:05.294483 4842 generic.go:334] "Generic (PLEG): container finished" podID="c7352a46-964e-478a-a141-7b1f3d529b85" containerID="ba883d0dbff2f8d72bcfa41bc18c26959b10543f2aee551d9c4325bf6653ef2e" exitCode=0 Feb 02 06:49:05 crc kubenswrapper[4842]: I0202 06:49:05.294601 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" event={"ID":"c7352a46-964e-478a-a141-7b1f3d529b85","Type":"ContainerDied","Data":"ba883d0dbff2f8d72bcfa41bc18c26959b10543f2aee551d9c4325bf6653ef2e"} Feb 02 06:49:05 crc kubenswrapper[4842]: I0202 06:49:05.297343 4842 generic.go:334] "Generic (PLEG): container finished" podID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerID="64198cd4ed9c3f648a83a0d5cc2017b0e62648734deb3f42088a21d4a035b132" exitCode=0 Feb 02 06:49:05 crc kubenswrapper[4842]: I0202 06:49:05.297379 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" event={"ID":"3a1b2909-d542-48b0-8729-294f7950ab2d","Type":"ContainerDied","Data":"64198cd4ed9c3f648a83a0d5cc2017b0e62648734deb3f42088a21d4a035b132"} Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.036272 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.147109 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.148982 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.424508 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-pbtq6" Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.452756 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.470751 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.883014 4842 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-rssw5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.883079 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.972496 4842 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-brh4m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 02 06:49:12 crc kubenswrapper[4842]: I0202 06:49:12.972565 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 02 06:49:15 crc kubenswrapper[4842]: E0202 06:49:15.314501 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 02 06:49:15 crc kubenswrapper[4842]: E0202 06:49:15.315165 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gfrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5l5m7_openshift-marketplace(99088cf9-5dcc-4837-943b-4deca45c1401): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 06:49:15 crc kubenswrapper[4842]: E0202 06:49:15.316571 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5l5m7" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.606130 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5l5m7" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.681798 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.682043 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrqbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-l9qkz_openshift-marketplace(c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.683363 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-l9qkz" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.690294 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.690445 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwfcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-m6ms7_openshift-marketplace(eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 06:49:16 crc kubenswrapper[4842]: E0202 06:49:16.691724 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-m6ms7" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.043772 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-m6ms7" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.043780 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-l9qkz" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.122651 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.122974 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150154 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"] Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.150363 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150374 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.150387 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150405 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.150412 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012f550e-3c84-45fc-8d26-c49c763e808f" containerName="pruner" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150417 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="012f550e-3c84-45fc-8d26-c49c763e808f" containerName="pruner" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.150424 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2298664c-b466-4829-bccf-8f5a49efafdb" containerName="pruner" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150430 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2298664c-b466-4829-bccf-8f5a49efafdb" containerName="pruner" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150521 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" containerName="controller-manager" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150532 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2298664c-b466-4829-bccf-8f5a49efafdb" containerName="pruner" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150538 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" containerName="route-controller-manager" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150545 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="012f550e-3c84-45fc-8d26-c49c763e808f" containerName="pruner" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.150873 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.174808 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"] Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.177044 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.177046 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.177153 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q662f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-z5jt7_openshift-marketplace(69e94ec9-2a3b-4f85-a2b7-9e2f07359890): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.177209 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtcmj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-9mdpt_openshift-marketplace(0401543d-1af2-45fd-a8e1-05cec083bdd7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.178500 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9mdpt" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.178516 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-z5jt7" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.195210 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.195360 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8v2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-74vp9_openshift-marketplace(671957e9-c40d-416d-8756-a4d7f0abc317): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.196581 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-74vp9" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204390 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert\") pod \"c7352a46-964e-478a-a141-7b1f3d529b85\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204426 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config\") pod \"3a1b2909-d542-48b0-8729-294f7950ab2d\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204461 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles\") pod \"c7352a46-964e-478a-a141-7b1f3d529b85\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204482 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j2bb\" (UniqueName: \"kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb\") pod \"3a1b2909-d542-48b0-8729-294f7950ab2d\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204522 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert\") pod \"3a1b2909-d542-48b0-8729-294f7950ab2d\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204553 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config\") pod \"c7352a46-964e-478a-a141-7b1f3d529b85\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204618 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca\") pod \"c7352a46-964e-478a-a141-7b1f3d529b85\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204667 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca\") pod \"3a1b2909-d542-48b0-8729-294f7950ab2d\" (UID: \"3a1b2909-d542-48b0-8729-294f7950ab2d\") " Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204691 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpp28\" (UniqueName: \"kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28\") pod \"c7352a46-964e-478a-a141-7b1f3d529b85\" (UID: \"c7352a46-964e-478a-a141-7b1f3d529b85\") " Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204920 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204964 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99hq\" (UniqueName: \"kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.204997 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.205020 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.205143 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config" (OuterVolumeSpecName: "config") pod "3a1b2909-d542-48b0-8729-294f7950ab2d" (UID: "3a1b2909-d542-48b0-8729-294f7950ab2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.205485 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "3a1b2909-d542-48b0-8729-294f7950ab2d" (UID: "3a1b2909-d542-48b0-8729-294f7950ab2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.205557 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c7352a46-964e-478a-a141-7b1f3d529b85" (UID: "c7352a46-964e-478a-a141-7b1f3d529b85"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.206522 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config" (OuterVolumeSpecName: "config") pod "c7352a46-964e-478a-a141-7b1f3d529b85" (UID: "c7352a46-964e-478a-a141-7b1f3d529b85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.207074 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca" (OuterVolumeSpecName: "client-ca") pod "c7352a46-964e-478a-a141-7b1f3d529b85" (UID: "c7352a46-964e-478a-a141-7b1f3d529b85"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.212495 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c7352a46-964e-478a-a141-7b1f3d529b85" (UID: "c7352a46-964e-478a-a141-7b1f3d529b85"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.213947 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3a1b2909-d542-48b0-8729-294f7950ab2d" (UID: "3a1b2909-d542-48b0-8729-294f7950ab2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.214694 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28" (OuterVolumeSpecName: "kube-api-access-wpp28") pod "c7352a46-964e-478a-a141-7b1f3d529b85" (UID: "c7352a46-964e-478a-a141-7b1f3d529b85"). InnerVolumeSpecName "kube-api-access-wpp28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.224703 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb" (OuterVolumeSpecName: "kube-api-access-8j2bb") pod "3a1b2909-d542-48b0-8729-294f7950ab2d" (UID: "3a1b2909-d542-48b0-8729-294f7950ab2d"). InnerVolumeSpecName "kube-api-access-8j2bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.305974 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99hq\" (UniqueName: \"kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306034 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306080 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306137 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306176 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306188 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpp28\" (UniqueName: \"kubernetes.io/projected/c7352a46-964e-478a-a141-7b1f3d529b85-kube-api-access-wpp28\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306223 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7352a46-964e-478a-a141-7b1f3d529b85-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306232 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a1b2909-d542-48b0-8729-294f7950ab2d-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306243 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-proxy-ca-bundles\") on 
node \"crc\" DevicePath \"\"" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306252 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j2bb\" (UniqueName: \"kubernetes.io/projected/3a1b2909-d542-48b0-8729-294f7950ab2d-kube-api-access-8j2bb\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306260 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a1b2909-d542-48b0-8729-294f7950ab2d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306268 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.306292 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c7352a46-964e-478a-a141-7b1f3d529b85-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.307157 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.307265 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.309883 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.323427 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99hq\" (UniqueName: \"kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq\") pod \"route-controller-manager-7966d87dbf-rsdxf\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") " pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.388819 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerStarted","Data":"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae"} Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.391764 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.391849 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-rssw5" event={"ID":"c7352a46-964e-478a-a141-7b1f3d529b85","Type":"ContainerDied","Data":"44ebd0c802db6062893241169e4706979097a692764a061e2fde6a02c71197ca"} Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.391906 4842 scope.go:117] "RemoveContainer" containerID="ba883d0dbff2f8d72bcfa41bc18c26959b10543f2aee551d9c4325bf6653ef2e" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.393377 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" event={"ID":"3a1b2909-d542-48b0-8729-294f7950ab2d","Type":"ContainerDied","Data":"643cd1b7543d0a40a6f2280aca5f3b03741bd2063f49a6310b7a1671fc67d3cc"} Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.393392 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.399787 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerStarted","Data":"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6"} Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.402704 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-74vp9" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.402905 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-z5jt7" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" Feb 02 06:49:18 crc kubenswrapper[4842]: E0202 06:49:18.404714 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9mdpt" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.421008 4842 scope.go:117] "RemoveContainer" containerID="64198cd4ed9c3f648a83a0d5cc2017b0e62648734deb3f42088a21d4a035b132" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.475530 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.499204 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9chjr"] Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.513938 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"] Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.515999 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-brh4m"] Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.521371 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"] Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.525239 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-rssw5"] Feb 02 06:49:18 crc kubenswrapper[4842]: I0202 06:49:18.890519 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"] Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.414498 4842 generic.go:334] "Generic (PLEG): container finished" podID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerID="7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6" exitCode=0 Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.414570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerDied","Data":"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6"} Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.420950 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9chjr" event={"ID":"4f6c3b51-669c-4c7b-a23a-ed68d139849e","Type":"ContainerStarted","Data":"b486737ddedac7129b1733a35834494a81d73278298468bd753a6886d46b395d"} Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.421016 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9chjr" event={"ID":"4f6c3b51-669c-4c7b-a23a-ed68d139849e","Type":"ContainerStarted","Data":"f49188ca76e1ac3c0015ec96901f860985577da243e613ed7fc520adbafd049c"} Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.421038 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9chjr" event={"ID":"4f6c3b51-669c-4c7b-a23a-ed68d139849e","Type":"ContainerStarted","Data":"d50abf0ae8daa7ec43e532feea59b20a173ab6c4ee290954300cc157f434f3d3"} Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.424487 4842 generic.go:334] "Generic (PLEG): container finished" podID="de569fea-56ca-4762-9a22-a12561c296b6" containerID="d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae" exitCode=0 Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.424549 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerDied","Data":"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae"} Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.468052 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1b2909-d542-48b0-8729-294f7950ab2d" 
path="/var/lib/kubelet/pods/3a1b2909-d542-48b0-8729-294f7950ab2d/volumes" Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.473452 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7352a46-964e-478a-a141-7b1f3d529b85" path="/var/lib/kubelet/pods/c7352a46-964e-478a-a141-7b1f3d529b85/volumes" Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.479071 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" event={"ID":"b8224d52-6c96-4873-a87c-1f9c6ad87bd3","Type":"ContainerStarted","Data":"bab940da589e780495eea930c1901067c60a2e6f9abdefe27a221f39280d831e"} Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.479119 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.479137 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" event={"ID":"b8224d52-6c96-4873-a87c-1f9c6ad87bd3","Type":"ContainerStarted","Data":"7354bc8151db2d16116ce4466471dd76aa94cf92497c574dcb299c2e66d9e17c"} Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.490894 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9chjr" podStartSLOduration=164.49085246 podStartE2EDuration="2m44.49085246s" podCreationTimestamp="2026-02-02 06:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:19.472087875 +0000 UTC m=+184.849355847" watchObservedRunningTime="2026-02-02 06:49:19.49085246 +0000 UTC m=+184.868120432" Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.532616 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" podStartSLOduration=15.532587913 podStartE2EDuration="15.532587913s" podCreationTimestamp="2026-02-02 06:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:19.528801401 +0000 UTC m=+184.906069373" watchObservedRunningTime="2026-02-02 06:49:19.532587913 +0000 UTC m=+184.909855825" Feb 02 06:49:19 crc kubenswrapper[4842]: I0202 06:49:19.571343 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.467640 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerStarted","Data":"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025"} Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.471928 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerStarted","Data":"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b"} Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.498704 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m2j5m" podStartSLOduration=3.676823826 podStartE2EDuration="32.498688665s" 
podCreationTimestamp="2026-02-02 06:48:48 +0000 UTC" firstStartedPulling="2026-02-02 06:48:51.009906384 +0000 UTC m=+156.387174296" lastFinishedPulling="2026-02-02 06:49:19.831771223 +0000 UTC m=+185.209039135" observedRunningTime="2026-02-02 06:49:20.495056117 +0000 UTC m=+185.872324049" watchObservedRunningTime="2026-02-02 06:49:20.498688665 +0000 UTC m=+185.875956577"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.518985 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wjfbs" podStartSLOduration=2.657177434 podStartE2EDuration="30.518971077s" podCreationTimestamp="2026-02-02 06:48:50 +0000 UTC" firstStartedPulling="2026-02-02 06:48:52.051773105 +0000 UTC m=+157.429041017" lastFinishedPulling="2026-02-02 06:49:19.913566748 +0000 UTC m=+185.290834660" observedRunningTime="2026-02-02 06:49:20.515682008 +0000 UTC m=+185.892949930" watchObservedRunningTime="2026-02-02 06:49:20.518971077 +0000 UTC m=+185.896238989"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.672863 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"]
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.673632 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.678397 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.678574 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.679044 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.679059 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.679358 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.680765 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.688242 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.690564 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"]
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.739050 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wjfbs"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.739247 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wjfbs"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.740346 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.740436 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grxxf\" (UniqueName: \"kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.740477 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.740504 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.740552 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.841522 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.841578 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.841650 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grxxf\" (UniqueName: \"kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.841695 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.841720 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.843562 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.843612 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.843614 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.854943 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:20 crc kubenswrapper[4842]: I0202 06:49:20.869511 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grxxf\" (UniqueName: \"kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf\") pod \"controller-manager-547cbbd8cb-cglf6\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") " pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:21 crc kubenswrapper[4842]: I0202 06:49:21.044599 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:21 crc kubenswrapper[4842]: I0202 06:49:21.480975 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"]
Feb 02 06:49:21 crc kubenswrapper[4842]: W0202 06:49:21.490160 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03b2bbfb_c88f_4cbc_b071_c6275ae02a03.slice/crio-ed008bd3a381e1a2b8c2b4471536e3df5c7ceab67537fb81461cedd0605070b1 WatchSource:0}: Error finding container ed008bd3a381e1a2b8c2b4471536e3df5c7ceab67537fb81461cedd0605070b1: Status 404 returned error can't find the container with id ed008bd3a381e1a2b8c2b4471536e3df5c7ceab67537fb81461cedd0605070b1
Feb 02 06:49:21 crc kubenswrapper[4842]: I0202 06:49:21.995431 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wjfbs" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="registry-server" probeResult="failure" output=<
Feb 02 06:49:21 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s
Feb 02 06:49:21 crc kubenswrapper[4842]: >
Feb 02 06:49:22 crc kubenswrapper[4842]: I0202 06:49:22.485108 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" event={"ID":"03b2bbfb-c88f-4cbc-b071-c6275ae02a03","Type":"ContainerStarted","Data":"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e"}
Feb 02 06:49:22 crc kubenswrapper[4842]: I0202 06:49:22.485158 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" event={"ID":"03b2bbfb-c88f-4cbc-b071-c6275ae02a03","Type":"ContainerStarted","Data":"ed008bd3a381e1a2b8c2b4471536e3df5c7ceab67537fb81461cedd0605070b1"}
Feb 02 06:49:22 crc kubenswrapper[4842]: I0202 06:49:22.504719 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" podStartSLOduration=18.504702032 podStartE2EDuration="18.504702032s" podCreationTimestamp="2026-02-02 06:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:22.502894778 +0000 UTC m=+187.880162700" watchObservedRunningTime="2026-02-02 06:49:22.504702032 +0000 UTC m=+187.881969944"
Feb 02 06:49:22 crc kubenswrapper[4842]: I0202 06:49:22.824964 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-wv68j"
Feb 02 06:49:23 crc kubenswrapper[4842]: I0202 06:49:23.490570 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:23 crc kubenswrapper[4842]: I0202 06:49:23.496505 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:23 crc kubenswrapper[4842]: I0202 06:49:23.768906 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.168393 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"]
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.259360 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"]
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.259666 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" podUID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" containerName="route-controller-manager" containerID="cri-o://bab940da589e780495eea930c1901067c60a2e6f9abdefe27a221f39280d831e" gracePeriod=30
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.501653 4842 generic.go:334] "Generic (PLEG): container finished" podID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" containerID="bab940da589e780495eea930c1901067c60a2e6f9abdefe27a221f39280d831e" exitCode=0
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.501739 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" event={"ID":"b8224d52-6c96-4873-a87c-1f9c6ad87bd3","Type":"ContainerDied","Data":"bab940da589e780495eea930c1901067c60a2e6f9abdefe27a221f39280d831e"}
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.658660 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.792929 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca\") pod \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") "
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.793083 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l99hq\" (UniqueName: \"kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq\") pod \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") "
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.793142 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config\") pod \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") "
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.793180 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert\") pod \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\" (UID: \"b8224d52-6c96-4873-a87c-1f9c6ad87bd3\") "
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.793851 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca" (OuterVolumeSpecName: "client-ca") pod "b8224d52-6c96-4873-a87c-1f9c6ad87bd3" (UID: "b8224d52-6c96-4873-a87c-1f9c6ad87bd3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.794027 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config" (OuterVolumeSpecName: "config") pod "b8224d52-6c96-4873-a87c-1f9c6ad87bd3" (UID: "b8224d52-6c96-4873-a87c-1f9c6ad87bd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.801026 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b8224d52-6c96-4873-a87c-1f9c6ad87bd3" (UID: "b8224d52-6c96-4873-a87c-1f9c6ad87bd3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.805707 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq" (OuterVolumeSpecName: "kube-api-access-l99hq") pod "b8224d52-6c96-4873-a87c-1f9c6ad87bd3" (UID: "b8224d52-6c96-4873-a87c-1f9c6ad87bd3"). InnerVolumeSpecName "kube-api-access-l99hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.895056 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.895108 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.895124 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l99hq\" (UniqueName: \"kubernetes.io/projected/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-kube-api-access-l99hq\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:24 crc kubenswrapper[4842]: I0202 06:49:24.895170 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8224d52-6c96-4873-a87c-1f9c6ad87bd3-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.520368 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf" event={"ID":"b8224d52-6c96-4873-a87c-1f9c6ad87bd3","Type":"ContainerDied","Data":"7354bc8151db2d16116ce4466471dd76aa94cf92497c574dcb299c2e66d9e17c"}
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.520426 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.520453 4842 scope.go:117] "RemoveContainer" containerID="bab940da589e780495eea930c1901067c60a2e6f9abdefe27a221f39280d831e"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.520587 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" podUID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" containerName="controller-manager" containerID="cri-o://4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e" gracePeriod=30
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.558483 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"]
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.563625 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7966d87dbf-rsdxf"]
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.673752 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"]
Feb 02 06:49:25 crc kubenswrapper[4842]: E0202 06:49:25.674464 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" containerName="route-controller-manager"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.674544 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" containerName="route-controller-manager"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.674706 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" containerName="route-controller-manager"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.675154 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.679403 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.679428 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.681920 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.681940 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.682105 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.682183 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.686493 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"]
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.830304 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbqw\" (UniqueName: \"kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.830361 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.830407 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.830454 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.931318 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.932725 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.931371 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrbqw\" (UniqueName: \"kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.932817 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.932861 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.935285 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.944838 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:25 crc kubenswrapper[4842]: I0202 06:49:25.947835 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrbqw\" (UniqueName: \"kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw\") pod \"route-controller-manager-68654ddbd-nd2df\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") " pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.001519 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.033123 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles\") pod \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") "
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.033179 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert\") pod \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") "
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.033276 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grxxf\" (UniqueName: \"kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf\") pod \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") "
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.033298 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config\") pod \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") "
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.033338 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca\") pod \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\" (UID: \"03b2bbfb-c88f-4cbc-b071-c6275ae02a03\") "
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.034013 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca" (OuterVolumeSpecName: "client-ca") pod "03b2bbfb-c88f-4cbc-b071-c6275ae02a03" (UID: "03b2bbfb-c88f-4cbc-b071-c6275ae02a03"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.034044 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "03b2bbfb-c88f-4cbc-b071-c6275ae02a03" (UID: "03b2bbfb-c88f-4cbc-b071-c6275ae02a03"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.034497 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config" (OuterVolumeSpecName: "config") pod "03b2bbfb-c88f-4cbc-b071-c6275ae02a03" (UID: "03b2bbfb-c88f-4cbc-b071-c6275ae02a03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.036800 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf" (OuterVolumeSpecName: "kube-api-access-grxxf") pod "03b2bbfb-c88f-4cbc-b071-c6275ae02a03" (UID: "03b2bbfb-c88f-4cbc-b071-c6275ae02a03"). InnerVolumeSpecName "kube-api-access-grxxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.036981 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "03b2bbfb-c88f-4cbc-b071-c6275ae02a03" (UID: "03b2bbfb-c88f-4cbc-b071-c6275ae02a03"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.040365 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.134359 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grxxf\" (UniqueName: \"kubernetes.io/projected/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-kube-api-access-grxxf\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.134384 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.134393 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.134405 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.134414 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b2bbfb-c88f-4cbc-b071-c6275ae02a03-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.223484 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"]
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.528499 4842 generic.go:334] "Generic (PLEG): container finished" podID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" containerID="4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e" exitCode=0
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.528563 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.528579 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" event={"ID":"03b2bbfb-c88f-4cbc-b071-c6275ae02a03","Type":"ContainerDied","Data":"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e"}
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.529120 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-547cbbd8cb-cglf6" event={"ID":"03b2bbfb-c88f-4cbc-b071-c6275ae02a03","Type":"ContainerDied","Data":"ed008bd3a381e1a2b8c2b4471536e3df5c7ceab67537fb81461cedd0605070b1"}
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.529159 4842 scope.go:117] "RemoveContainer" containerID="4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e"
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.532946 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" event={"ID":"2cd1f864-6b9b-4113-b65e-446049b9af92","Type":"ContainerStarted","Data":"55e75296f0e6047802f588fbbf9926e666199b348dea699c186a87607d8698c7"}
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.532996 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" event={"ID":"2cd1f864-6b9b-4113-b65e-446049b9af92","Type":"ContainerStarted","Data":"0429779ecc8d7f354927858d9f829de9c008478a695454154ec2b53a1da0abb2"}
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.533139 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.548380 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" podStartSLOduration=2.5483590830000002 podStartE2EDuration="2.548359083s" podCreationTimestamp="2026-02-02 06:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:26.54781589 +0000 UTC m=+191.925083802" watchObservedRunningTime="2026-02-02 06:49:26.548359083 +0000 UTC m=+191.925626995"
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.558402 4842 scope.go:117] "RemoveContainer" containerID="4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e"
Feb 02 06:49:26 crc kubenswrapper[4842]: E0202 06:49:26.558978 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e\": container with ID starting with 4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e not found: ID does not exist" containerID="4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e"
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.559020 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e"} err="failed to get container status \"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e\": rpc error: code = NotFound desc = could not find container \"4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e\": container with ID starting with 4d79b5384b066d0a78742e3704ce1509026469f4826ef043da30720693d3be6e not found: ID does not exist"
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.575960 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"]
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.581344 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-547cbbd8cb-cglf6"]
Feb 02 06:49:26 crc kubenswrapper[4842]: I0202 06:49:26.910951 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.444998 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" path="/var/lib/kubelet/pods/03b2bbfb-c88f-4cbc-b071-c6275ae02a03/volumes"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.446756 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8224d52-6c96-4873-a87c-1f9c6ad87bd3" path="/var/lib/kubelet/pods/b8224d52-6c96-4873-a87c-1f9c6ad87bd3/volumes"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.687396 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"]
Feb 02 06:49:27 crc kubenswrapper[4842]: E0202 06:49:27.687638 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" containerName="controller-manager"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.687652 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" containerName="controller-manager"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.687781 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b2bbfb-c88f-4cbc-b071-c6275ae02a03" containerName="controller-manager"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.688182 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.692434 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.693025 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.693758 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.694203 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.694365 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.696404 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"]
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.700540 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.707064 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.861438 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.861491 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.861524 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gr2f\" (UniqueName: \"kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.861664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.861732 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.962642 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.962725 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.962758 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.962786 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.962811 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gr2f\" (UniqueName: \"kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.963943 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.964600 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.965368 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.971367 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:27 crc kubenswrapper[4842]: I0202 06:49:27.978753 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gr2f\" (UniqueName: \"kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f\") pod \"controller-manager-99f997678-95hv6\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") " pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.013775 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.244504 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"]
Feb 02 06:49:28 crc kubenswrapper[4842]: W0202 06:49:28.253735 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b226528_cbee_4e1b_a63a_2e9cb152a9a5.slice/crio-ff5feb05e1f6a299dda4671dfa6361e0b820e5dc062a808b595cb6a3638ecd2f WatchSource:0}: Error finding container ff5feb05e1f6a299dda4671dfa6361e0b820e5dc062a808b595cb6a3638ecd2f: Status 404 returned error can't find the container with id ff5feb05e1f6a299dda4671dfa6361e0b820e5dc062a808b595cb6a3638ecd2f
Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.546917 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" event={"ID":"0b226528-cbee-4e1b-a63a-2e9cb152a9a5","Type":"ContainerStarted","Data":"460312f0fdda5f4c6106f8723d73d45f294eafbd8190af71f258393d8fc703a6"}
Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.547284 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" event={"ID":"0b226528-cbee-4e1b-a63a-2e9cb152a9a5","Type":"ContainerStarted","Data":"ff5feb05e1f6a299dda4671dfa6361e0b820e5dc062a808b595cb6a3638ecd2f"}
Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.547306 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.556066 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-99f997678-95hv6"
Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.570255 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" podStartSLOduration=4.570233764 podStartE2EDuration="4.570233764s" podCreationTimestamp="2026-02-02 06:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:28.565649132 +0000 UTC m=+193.942917054" watchObservedRunningTime="2026-02-02 06:49:28.570233764 +0000 UTC m=+193.947501676"
Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.805104 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m2j5m"
Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.805260 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m2j5m"
Feb 02 06:49:28 crc kubenswrapper[4842]: I0202 06:49:28.854257 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m2j5m"
Feb 02 06:49:29 crc kubenswrapper[4842]: I0202 06:49:29.613269 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m2j5m"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.434748 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.435609 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.438563 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.483241 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.483300 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.608898 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.609027 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.691417 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"]
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.709889 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.709962 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.710040 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.751685 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.777666 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wjfbs"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.791179 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 02 06:49:30 crc kubenswrapper[4842]: I0202 06:49:30.835522 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wjfbs"
Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.336249 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 02 06:49:31 crc kubenswrapper[4842]: W0202 06:49:31.340206 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcedde76f_459c_4b6b_8535_407c5e392ae7.slice/crio-a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4 WatchSource:0}: Error finding container a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4: Status 404 returned error can't find the container with id a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4
Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.571808 4842 generic.go:334] "Generic (PLEG): container finished" podID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerID="df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca" exitCode=0
Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.571877 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerDied","Data":"df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca"}
Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.575807 4842 generic.go:334] "Generic (PLEG): container finished" podID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerID="eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd" exitCode=0
Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.575865 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerDied","Data":"eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd"}
Feb 02 06:49:31 crc kubenswrapper[4842]: I0202 06:49:31.578521 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cedde76f-459c-4b6b-8535-407c5e392ae7","Type":"ContainerStarted","Data":"a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4"}
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.585649 4842 generic.go:334] "Generic (PLEG): container finished" podID="cedde76f-459c-4b6b-8535-407c5e392ae7" containerID="7d3b218c1e52bef522f13c85d510d4be2ae307bc8a91ffd26af387612387100e" exitCode=0
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.585713 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cedde76f-459c-4b6b-8535-407c5e392ae7","Type":"ContainerDied","Data":"7d3b218c1e52bef522f13c85d510d4be2ae307bc8a91ffd26af387612387100e"}
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.588984 4842 generic.go:334] "Generic (PLEG): container finished" podID="671957e9-c40d-416d-8756-a4d7f0abc317" containerID="e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551" exitCode=0
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.589062 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerDied","Data":"e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551"}
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.590803 4842 generic.go:334] "Generic (PLEG): container finished" podID="99088cf9-5dcc-4837-943b-4deca45c1401" containerID="6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41" exitCode=0
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.590864 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerDied","Data":"6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41"}
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.597783 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerStarted","Data":"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489"}
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.600441 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerStarted","Data":"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6"}
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.652902 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m6ms7" podStartSLOduration=3.5974510950000003 podStartE2EDuration="44.652883216s" podCreationTimestamp="2026-02-02 06:48:48 +0000 UTC" firstStartedPulling="2026-02-02 06:48:50.977085078 +0000 UTC m=+156.354352990" lastFinishedPulling="2026-02-02 06:49:32.032517199 +0000 UTC m=+197.409785111" observedRunningTime="2026-02-02 06:49:32.652082556 +0000 UTC m=+198.029350468" watchObservedRunningTime="2026-02-02 06:49:32.652883216 +0000 UTC m=+198.030151128"
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.669510 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9mdpt" podStartSLOduration=5.581205089 podStartE2EDuration="46.66949272s" podCreationTimestamp="2026-02-02 06:48:46 +0000 UTC" firstStartedPulling="2026-02-02 06:48:50.982042998 +0000 UTC m=+156.359310910" lastFinishedPulling="2026-02-02 06:49:32.070330619 +0000 UTC m=+197.447598541" observedRunningTime="2026-02-02 06:49:32.665979284 +0000 UTC m=+198.043247206" watchObservedRunningTime="2026-02-02 06:49:32.66949272 +0000 UTC m=+198.046760632"
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.681983 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"]
Feb 02 06:49:32 crc kubenswrapper[4842]: I0202 06:49:32.682311 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wjfbs" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="registry-server" containerID="cri-o://e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b" gracePeriod=2
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.162695 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjfbs"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.347969 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content\") pod \"7be4c568-0aa4-4495-87b0-ec266872eb12\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") "
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.348105 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities\") pod \"7be4c568-0aa4-4495-87b0-ec266872eb12\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") "
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.348157 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zgw2\" (UniqueName: \"kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2\") pod \"7be4c568-0aa4-4495-87b0-ec266872eb12\" (UID: \"7be4c568-0aa4-4495-87b0-ec266872eb12\") "
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.350094 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities" (OuterVolumeSpecName: "utilities") pod "7be4c568-0aa4-4495-87b0-ec266872eb12" (UID: "7be4c568-0aa4-4495-87b0-ec266872eb12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.353379 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2" (OuterVolumeSpecName: "kube-api-access-8zgw2") pod "7be4c568-0aa4-4495-87b0-ec266872eb12" (UID: "7be4c568-0aa4-4495-87b0-ec266872eb12"). InnerVolumeSpecName "kube-api-access-8zgw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.452274 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.452312 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zgw2\" (UniqueName: \"kubernetes.io/projected/7be4c568-0aa4-4495-87b0-ec266872eb12-kube-api-access-8zgw2\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.482428 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7be4c568-0aa4-4495-87b0-ec266872eb12" (UID: "7be4c568-0aa4-4495-87b0-ec266872eb12"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.553257 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7be4c568-0aa4-4495-87b0-ec266872eb12-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.607312 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerStarted","Data":"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca"}
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.609302 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerStarted","Data":"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a"}
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.610994 4842 generic.go:334] "Generic (PLEG): container finished" podID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerID="e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b" exitCode=0
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.611142 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wjfbs"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.611149 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerDied","Data":"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b"}
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.611375 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wjfbs" event={"ID":"7be4c568-0aa4-4495-87b0-ec266872eb12","Type":"ContainerDied","Data":"4d9e0a84da8f191972cd048e101e3cd6029560ea1537fa6b0b79bb80a6aa52cf"}
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.611399 4842 scope.go:117] "RemoveContainer" containerID="e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.626920 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-74vp9" podStartSLOduration=4.756231769 podStartE2EDuration="47.626905899s" podCreationTimestamp="2026-02-02 06:48:46 +0000 UTC" firstStartedPulling="2026-02-02 06:48:50.17562678 +0000 UTC m=+155.552894722" lastFinishedPulling="2026-02-02 06:49:33.04630094 +0000 UTC m=+198.423568852" observedRunningTime="2026-02-02 06:49:33.621914777 +0000 UTC m=+198.999182689" watchObservedRunningTime="2026-02-02 06:49:33.626905899 +0000 UTC m=+199.004173801"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.633576 4842 scope.go:117] "RemoveContainer" containerID="7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.643707 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5l5m7" podStartSLOduration=2.690838227 podStartE2EDuration="43.643677107s" podCreationTimestamp="2026-02-02 06:48:50 +0000 UTC" firstStartedPulling="2026-02-02 06:48:52.044198951 +0000 UTC m=+157.421466863" lastFinishedPulling="2026-02-02 06:49:32.997037821 +0000 UTC m=+198.374305743" observedRunningTime="2026-02-02 06:49:33.641366401 +0000 UTC m=+199.018634313" watchObservedRunningTime="2026-02-02 06:49:33.643677107 +0000 UTC m=+199.020945019"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.653277 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"]
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.657588 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wjfbs"]
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.670041 4842 scope.go:117] "RemoveContainer" containerID="e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.687556 4842 scope.go:117] "RemoveContainer" containerID="e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b"
Feb 02 06:49:33 crc kubenswrapper[4842]: E0202 06:49:33.688451 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b\": container with ID starting with e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b not found: ID does not exist" containerID="e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.688502 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b"} err="failed to get container status \"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b\": rpc error: code = NotFound desc = could not find container \"e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b\": container with ID starting with e936be960fc6a4acd631d5e4fcc059849d751995376968cab91ef3cd5907201b not found: ID does not exist"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.688536 4842 scope.go:117] "RemoveContainer" containerID="7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6"
Feb 02 06:49:33 crc kubenswrapper[4842]: E0202 06:49:33.689595 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6\": container with ID starting with 7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6 not found: ID does not exist" containerID="7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.689682 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6"} err="failed to get container status \"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6\": rpc error: code = NotFound desc = could not find container \"7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6\": container with ID starting with 7631b0b59937c4a2a88980f2a0026660fe847cb4cbe41b4698eeef6e106359e6 not found: ID does not exist"
Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.689741 4842 scope.go:117] "RemoveContainer" containerID="e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f"
Feb 02 06:49:33 crc kubenswrapper[4842]: E0202 06:49:33.690489 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f\": container with ID starting with e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f not found: ID does not exist" containerID="e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.690537 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f"} err="failed to get container status \"e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f\": rpc error: code = NotFound desc = could not find container \"e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f\": container with ID starting with e5acdc10177108fa441e86a0649b2035781aef8bfbfa243aa0504a82b02bbf9f not found: ID does not exist" Feb 02 06:49:33 crc kubenswrapper[4842]: I0202 06:49:33.954225 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.060532 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access\") pod \"cedde76f-459c-4b6b-8535-407c5e392ae7\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.060635 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir\") pod \"cedde76f-459c-4b6b-8535-407c5e392ae7\" (UID: \"cedde76f-459c-4b6b-8535-407c5e392ae7\") " Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.060751 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cedde76f-459c-4b6b-8535-407c5e392ae7" (UID: "cedde76f-459c-4b6b-8535-407c5e392ae7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.060940 4842 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cedde76f-459c-4b6b-8535-407c5e392ae7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.063589 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cedde76f-459c-4b6b-8535-407c5e392ae7" (UID: "cedde76f-459c-4b6b-8535-407c5e392ae7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.161927 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cedde76f-459c-4b6b-8535-407c5e392ae7-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.618460 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cedde76f-459c-4b6b-8535-407c5e392ae7","Type":"ContainerDied","Data":"a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4"} Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.618502 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5a41fed2e4b794d72cb0daf4150c5e8b6c1d27aef982c793474fc7005b5b1b4" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.618512 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.621484 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerID="26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568" exitCode=0 Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.621572 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerDied","Data":"26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568"} Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.623789 4842 generic.go:334] "Generic (PLEG): container finished" podID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerID="e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35" exitCode=0 Feb 02 06:49:34 crc kubenswrapper[4842]: I0202 06:49:34.623819 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerDied","Data":"e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35"} Feb 02 06:49:35 crc kubenswrapper[4842]: I0202 06:49:35.440051 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" path="/var/lib/kubelet/pods/7be4c568-0aa4-4495-87b0-ec266872eb12/volumes" Feb 02 06:49:35 crc kubenswrapper[4842]: I0202 06:49:35.630303 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerStarted","Data":"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926"} Feb 02 06:49:35 crc kubenswrapper[4842]: I0202 06:49:35.632192 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerStarted","Data":"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946"} Feb 02 06:49:35 crc kubenswrapper[4842]: I0202 06:49:35.647173 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l9qkz" podStartSLOduration=4.794729785 podStartE2EDuration="49.647154821s" podCreationTimestamp="2026-02-02 06:48:46 +0000 UTC" firstStartedPulling="2026-02-02 06:48:50.176267545 +0000 UTC m=+155.553535487" lastFinishedPulling="2026-02-02 06:49:35.028692611 
+0000 UTC m=+200.405960523" observedRunningTime="2026-02-02 06:49:35.645719616 +0000 UTC m=+201.022987528" watchObservedRunningTime="2026-02-02 06:49:35.647154821 +0000 UTC m=+201.024422733" Feb 02 06:49:35 crc kubenswrapper[4842]: I0202 06:49:35.662333 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z5jt7" podStartSLOduration=3.524232864 podStartE2EDuration="49.6623144s" podCreationTimestamp="2026-02-02 06:48:46 +0000 UTC" firstStartedPulling="2026-02-02 06:48:48.930635679 +0000 UTC m=+154.307903581" lastFinishedPulling="2026-02-02 06:49:35.068717205 +0000 UTC m=+200.445985117" observedRunningTime="2026-02-02 06:49:35.6610903 +0000 UTC m=+201.038358212" watchObservedRunningTime="2026-02-02 06:49:35.6623144 +0000 UTC m=+201.039582312" Feb 02 06:49:36 crc kubenswrapper[4842]: I0202 06:49:36.812103 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-74vp9" Feb 02 06:49:36 crc kubenswrapper[4842]: I0202 06:49:36.812164 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-74vp9" Feb 02 06:49:36 crc kubenswrapper[4842]: I0202 06:49:36.853271 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-74vp9" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.209323 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.209409 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.258339 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.365842 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.366133 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.419501 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.505440 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.505480 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.561610 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.710132 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.827775 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 06:49:37 crc kubenswrapper[4842]: E0202 06:49:37.828111 4842 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="registry-server" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828132 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="registry-server" Feb 02 06:49:37 crc kubenswrapper[4842]: E0202 06:49:37.828152 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="extract-content" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828165 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="extract-content" Feb 02 06:49:37 crc kubenswrapper[4842]: E0202 06:49:37.828187 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cedde76f-459c-4b6b-8535-407c5e392ae7" containerName="pruner" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828201 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cedde76f-459c-4b6b-8535-407c5e392ae7" containerName="pruner" Feb 02 06:49:37 crc kubenswrapper[4842]: E0202 06:49:37.828267 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="extract-utilities" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828286 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="extract-utilities" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828512 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cedde76f-459c-4b6b-8535-407c5e392ae7" containerName="pruner" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.828536 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be4c568-0aa4-4495-87b0-ec266872eb12" containerName="registry-server" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.829104 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.831754 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.831905 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 02 06:49:37 crc kubenswrapper[4842]: I0202 06:49:37.869537 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.010663 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.010743 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.010974 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.112277 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.112342 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.112366 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.112729 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.112772 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.132047 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access\") pod \"installer-9-crc\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.146415 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.571412 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 02 06:49:38 crc kubenswrapper[4842]: W0202 06:49:38.574941 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podea82b6bc_5c1e_496e_8501_45fdb7220cbb.slice/crio-0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200 WatchSource:0}: Error finding container 0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200: Status 404 returned error can't find the container with id 0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200 Feb 02 06:49:38 crc kubenswrapper[4842]: I0202 06:49:38.647245 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea82b6bc-5c1e-496e-8501-45fdb7220cbb","Type":"ContainerStarted","Data":"0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200"} Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.160396 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.161775 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.203929 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.656029 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea82b6bc-5c1e-496e-8501-45fdb7220cbb","Type":"ContainerStarted","Data":"240ef4d9719e0e125f80aaba75a288ed11f634bda46b01e82f75011b4bb97529"} Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.683614 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.683585608 podStartE2EDuration="2.683585608s" podCreationTimestamp="2026-02-02 06:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:39.678624407 +0000 UTC m=+205.055892369" watchObservedRunningTime="2026-02-02 06:49:39.683585608 +0000 UTC m=+205.060853560" Feb 02 06:49:39 crc kubenswrapper[4842]: I0202 06:49:39.724475 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:49:40 crc kubenswrapper[4842]: I0202 06:49:40.702639 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:49:40 crc kubenswrapper[4842]: I0202 06:49:40.703100 4842 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:49:40 crc kubenswrapper[4842]: I0202 06:49:40.765806 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.096768 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"] Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.097013 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9mdpt" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="registry-server" containerID="cri-o://78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6" gracePeriod=2 Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.535590 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.659608 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtcmj\" (UniqueName: \"kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj\") pod \"0401543d-1af2-45fd-a8e1-05cec083bdd7\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.659710 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content\") pod \"0401543d-1af2-45fd-a8e1-05cec083bdd7\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.659843 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities\") pod \"0401543d-1af2-45fd-a8e1-05cec083bdd7\" (UID: \"0401543d-1af2-45fd-a8e1-05cec083bdd7\") " Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.661701 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities" (OuterVolumeSpecName: "utilities") pod "0401543d-1af2-45fd-a8e1-05cec083bdd7" (UID: "0401543d-1af2-45fd-a8e1-05cec083bdd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.665666 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj" (OuterVolumeSpecName: "kube-api-access-dtcmj") pod "0401543d-1af2-45fd-a8e1-05cec083bdd7" (UID: "0401543d-1af2-45fd-a8e1-05cec083bdd7"). InnerVolumeSpecName "kube-api-access-dtcmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.687410 4842 generic.go:334] "Generic (PLEG): container finished" podID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerID="78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6" exitCode=0 Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.687504 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9mdpt" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.687648 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerDied","Data":"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6"} Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.687792 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9mdpt" event={"ID":"0401543d-1af2-45fd-a8e1-05cec083bdd7","Type":"ContainerDied","Data":"ad1fd21c691dc675b62fad95a6e7e8ad52ebcb62e20c4eefb0dc3125badfd973"} Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.687894 4842 scope.go:117] "RemoveContainer" containerID="78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.728485 4842 scope.go:117] "RemoveContainer" containerID="eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.754003 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0401543d-1af2-45fd-a8e1-05cec083bdd7" (UID: "0401543d-1af2-45fd-a8e1-05cec083bdd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.761843 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtcmj\" (UniqueName: \"kubernetes.io/projected/0401543d-1af2-45fd-a8e1-05cec083bdd7-kube-api-access-dtcmj\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.761876 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.761888 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0401543d-1af2-45fd-a8e1-05cec083bdd7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.768870 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5l5m7" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.769982 4842 scope.go:117] "RemoveContainer" containerID="1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.805795 4842 scope.go:117] "RemoveContainer" containerID="78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6" Feb 02 06:49:41 crc kubenswrapper[4842]: E0202 06:49:41.814862 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6\": container with ID starting with 78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6 not found: ID does not exist" containerID="78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.814920 4842 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6"} err="failed to get container status \"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6\": rpc error: code = NotFound desc = could not find container \"78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6\": container with ID starting with 78e9529b82e73aa19433041fe4d23066cbcbc288f5d51f46315d8056d17cf0f6 not found: ID does not exist" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.814955 4842 scope.go:117] "RemoveContainer" containerID="eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd" Feb 02 06:49:41 crc kubenswrapper[4842]: E0202 06:49:41.815442 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd\": container with ID starting with eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd not found: ID does not exist" containerID="eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.815514 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd"} err="failed to get container status \"eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd\": rpc error: code = NotFound desc = could not find container \"eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd\": container with ID starting with eaf9d6c021e806051d6b0ac858b58d93cb7766dc6129686409ffda36e557eccd not found: ID does not exist" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.815580 4842 scope.go:117] "RemoveContainer" containerID="1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0" Feb 02 06:49:41 crc kubenswrapper[4842]: E0202 06:49:41.816078 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0\": container with ID starting with 1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0 not found: ID does not exist" containerID="1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0" Feb 02 06:49:41 crc kubenswrapper[4842]: I0202 06:49:41.816163 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0"} err="failed to get container status \"1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0\": rpc error: code = NotFound desc = could not find container \"1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0\": container with ID starting with 1b665abd516c92090ff869fab9ed846ef67fb35ff96dbe511b66a77bb2b78db0 not found: ID does not exist" Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.036509 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"] Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.044207 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9mdpt"] Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.146351 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.146434 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.146497 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.147336 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.147456 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5" gracePeriod=600 Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.696779 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5" exitCode=0 Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.696857 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5"} Feb 02 06:49:42 crc kubenswrapper[4842]: I0202 06:49:42.697115 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb"} Feb 02 06:49:43 crc kubenswrapper[4842]: I0202 06:49:43.445836 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" path="/var/lib/kubelet/pods/0401543d-1af2-45fd-a8e1-05cec083bdd7/volumes" Feb 02 06:49:43 crc kubenswrapper[4842]: I0202 06:49:43.487325 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"] Feb 02 06:49:43 crc kubenswrapper[4842]: I0202 06:49:43.487679 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m6ms7" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="registry-server" containerID="cri-o://f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489" gracePeriod=2 Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.189693 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"] Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.190421 4842 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" podUID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" containerName="controller-manager" containerID="cri-o://460312f0fdda5f4c6106f8723d73d45f294eafbd8190af71f258393d8fc703a6" gracePeriod=30 Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.211655 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"] Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.211970 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" podUID="2cd1f864-6b9b-4113-b65e-446049b9af92" containerName="route-controller-manager" containerID="cri-o://55e75296f0e6047802f588fbbf9926e666199b348dea699c186a87607d8698c7" gracePeriod=30 Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.576414 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.704068 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities\") pod \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.704136 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content\") pod \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.704199 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwfcq\" (UniqueName: \"kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq\") pod \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\" (UID: \"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb\") " Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.706040 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities" (OuterVolumeSpecName: "utilities") pod "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" (UID: "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.718712 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq" (OuterVolumeSpecName: "kube-api-access-jwfcq") pod "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" (UID: "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb"). InnerVolumeSpecName "kube-api-access-jwfcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.729798 4842 generic.go:334] "Generic (PLEG): container finished" podID="2cd1f864-6b9b-4113-b65e-446049b9af92" containerID="55e75296f0e6047802f588fbbf9926e666199b348dea699c186a87607d8698c7" exitCode=0 Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.729804 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" (UID: "eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.729880 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" event={"ID":"2cd1f864-6b9b-4113-b65e-446049b9af92","Type":"ContainerDied","Data":"55e75296f0e6047802f588fbbf9926e666199b348dea699c186a87607d8698c7"} Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.730162 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" event={"ID":"2cd1f864-6b9b-4113-b65e-446049b9af92","Type":"ContainerDied","Data":"0429779ecc8d7f354927858d9f829de9c008478a695454154ec2b53a1da0abb2"} Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.730276 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0429779ecc8d7f354927858d9f829de9c008478a695454154ec2b53a1da0abb2" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.731400 4842 generic.go:334] "Generic (PLEG): container finished" podID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" containerID="460312f0fdda5f4c6106f8723d73d45f294eafbd8190af71f258393d8fc703a6" exitCode=0 Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.731503 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" event={"ID":"0b226528-cbee-4e1b-a63a-2e9cb152a9a5","Type":"ContainerDied","Data":"460312f0fdda5f4c6106f8723d73d45f294eafbd8190af71f258393d8fc703a6"} Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.733170 4842 generic.go:334] "Generic (PLEG): container finished" podID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerID="f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489" exitCode=0 Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.733255 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6ms7" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.733283 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerDied","Data":"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489"} Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.735408 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6ms7" event={"ID":"eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb","Type":"ContainerDied","Data":"d839d2fe1ddee6dc1ee5e5c2514aaebc941a9e75e08e10d40cd5d9caf2627fd2"} Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.735459 4842 scope.go:117] "RemoveContainer" containerID="f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.751354 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.762635 4842 scope.go:117] "RemoveContainer" containerID="df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.770898 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"] Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.773537 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6ms7"] Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.795013 4842 scope.go:117] "RemoveContainer" containerID="d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.805496 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.805526 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.805541 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwfcq\" (UniqueName: \"kubernetes.io/projected/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb-kube-api-access-jwfcq\") on node \"crc\" DevicePath \"\"" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.826054 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.833108 4842 scope.go:117] "RemoveContainer" containerID="f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489" Feb 02 06:49:44 crc kubenswrapper[4842]: E0202 06:49:44.833566 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489\": container with ID starting with f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489 not found: ID does not exist" containerID="f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.833672 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489"} err="failed to get container status \"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489\": rpc error: code = NotFound desc = could not find container \"f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489\": container with ID starting with f5fe3ff29a99306622ed83546bc7f2e5eae5880c68b19bacf3a85ef4ebbe4489 not found: ID does not exist" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.833767 4842 scope.go:117] "RemoveContainer" containerID="df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca" Feb 02 06:49:44 crc kubenswrapper[4842]: E0202 06:49:44.834154 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca\": container with ID starting with df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca not found: ID does not exist" containerID="df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.834256 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca"} err="failed to get container status \"df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca\": rpc error: code = NotFound desc = could not find container \"df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca\": container with ID starting with df039b89a3cc566c5bb891b0ad1811eb0ba3b5b7e84a10777cf32c394169a4ca not found: ID does not exist" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.834354 4842 scope.go:117] "RemoveContainer" containerID="d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d" Feb 02 06:49:44 crc kubenswrapper[4842]: E0202 06:49:44.834914 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d\": container with ID starting with d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d not found: ID does not exist" containerID="d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d" Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.834997 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d"} err="failed to get container status \"d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d\": rpc 
error: code = NotFound desc = could not find container \"d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d\": container with ID starting with d79b8cf4d7bb1113fe8f1b4ee67187f662ef997ced43c01af79821854dc7c65d not found: ID does not exist"
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.906927 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config\") pod \"2cd1f864-6b9b-4113-b65e-446049b9af92\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.906977 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrbqw\" (UniqueName: \"kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw\") pod \"2cd1f864-6b9b-4113-b65e-446049b9af92\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.907007 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca\") pod \"2cd1f864-6b9b-4113-b65e-446049b9af92\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.907035 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert\") pod \"2cd1f864-6b9b-4113-b65e-446049b9af92\" (UID: \"2cd1f864-6b9b-4113-b65e-446049b9af92\") "
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.907850 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config" (OuterVolumeSpecName: "config") pod "2cd1f864-6b9b-4113-b65e-446049b9af92" (UID: "2cd1f864-6b9b-4113-b65e-446049b9af92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.908162 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca" (OuterVolumeSpecName: "client-ca") pod "2cd1f864-6b9b-4113-b65e-446049b9af92" (UID: "2cd1f864-6b9b-4113-b65e-446049b9af92"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.910481 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2cd1f864-6b9b-4113-b65e-446049b9af92" (UID: "2cd1f864-6b9b-4113-b65e-446049b9af92"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:44 crc kubenswrapper[4842]: I0202 06:49:44.910497 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw" (OuterVolumeSpecName: "kube-api-access-jrbqw") pod "2cd1f864-6b9b-4113-b65e-446049b9af92" (UID: "2cd1f864-6b9b-4113-b65e-446049b9af92"). InnerVolumeSpecName "kube-api-access-jrbqw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.007936 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config\") pod \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") "
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008193 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert\") pod \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") "
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008288 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles\") pod \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") "
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008332 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca\") pod \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") "
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008392 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gr2f\" (UniqueName: \"kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f\") pod \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\" (UID: \"0b226528-cbee-4e1b-a63a-2e9cb152a9a5\") "
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008818 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cd1f864-6b9b-4113-b65e-446049b9af92-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008878 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008908 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrbqw\" (UniqueName: \"kubernetes.io/projected/2cd1f864-6b9b-4113-b65e-446049b9af92-kube-api-access-jrbqw\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.008935 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cd1f864-6b9b-4113-b65e-446049b9af92-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.009415 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0b226528-cbee-4e1b-a63a-2e9cb152a9a5" (UID: "0b226528-cbee-4e1b-a63a-2e9cb152a9a5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.009628 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b226528-cbee-4e1b-a63a-2e9cb152a9a5" (UID: "0b226528-cbee-4e1b-a63a-2e9cb152a9a5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.009915 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config" (OuterVolumeSpecName: "config") pod "0b226528-cbee-4e1b-a63a-2e9cb152a9a5" (UID: "0b226528-cbee-4e1b-a63a-2e9cb152a9a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.013877 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b226528-cbee-4e1b-a63a-2e9cb152a9a5" (UID: "0b226528-cbee-4e1b-a63a-2e9cb152a9a5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.014555 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f" (OuterVolumeSpecName: "kube-api-access-4gr2f") pod "0b226528-cbee-4e1b-a63a-2e9cb152a9a5" (UID: "0b226528-cbee-4e1b-a63a-2e9cb152a9a5"). InnerVolumeSpecName "kube-api-access-4gr2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.109973 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.110023 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.110038 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-client-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.110051 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gr2f\" (UniqueName: \"kubernetes.io/projected/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-kube-api-access-4gr2f\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.110063 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b226528-cbee-4e1b-a63a-2e9cb152a9a5-config\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.451793 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" path="/var/lib/kubelet/pods/eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb/volumes"
pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696775 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696805 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696826 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd1f864-6b9b-4113-b65e-446049b9af92" containerName="route-controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696840 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd1f864-6b9b-4113-b65e-446049b9af92" containerName="route-controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696865 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696878 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696892 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" containerName="controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696908 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" containerName="controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696928 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="extract-content" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696940 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="extract-content" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696961 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="extract-content" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.696975 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="extract-content" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.696998 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="extract-utilities" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.697011 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="extract-utilities" Feb 02 06:49:45 crc kubenswrapper[4842]: E0202 06:49:45.697031 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="extract-utilities" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.697045 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="extract-utilities" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.698719 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd1f864-6b9b-4113-b65e-446049b9af92" containerName="route-controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 
06:49:45.698765 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0401543d-1af2-45fd-a8e1-05cec083bdd7" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.698823 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" containerName="controller-manager" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.698842 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac4eef9-e834-4200-a3a6-5cc1e5a9a2cb" containerName="registry-server" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.701293 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.714548 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.717322 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.725580 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.729568 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.751756 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.754455 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.754821 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-99f997678-95hv6" event={"ID":"0b226528-cbee-4e1b-a63a-2e9cb152a9a5","Type":"ContainerDied","Data":"ff5feb05e1f6a299dda4671dfa6361e0b820e5dc062a808b595cb6a3638ecd2f"} Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.754856 4842 scope.go:117] "RemoveContainer" containerID="460312f0fdda5f4c6106f8723d73d45f294eafbd8190af71f258393d8fc703a6" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.794176 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"] Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.799297 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-68654ddbd-nd2df"] Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.806385 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"] Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.814872 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-99f997678-95hv6"] Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819168 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819206 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819268 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58hc\" (UniqueName: \"kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819371 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819396 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " 
pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819420 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819548 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819601 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.819657 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjb8m\" (UniqueName: \"kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.921061 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.921375 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922318 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b58hc\" (UniqueName: \"kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922361 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " 
pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922386 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922410 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922449 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922473 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.922501 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjb8m\" (UniqueName: \"kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.923134 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.923909 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.924419 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.925584 4842 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.926696 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.928151 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.932659 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.949234 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58hc\" (UniqueName: \"kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc\") pod \"route-controller-manager-f865c6b84-bslhd\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:45 crc kubenswrapper[4842]: I0202 06:49:45.950874 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjb8m\" (UniqueName: \"kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m\") pod \"controller-manager-577b8789bf-xqfmj\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.081718 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.085075 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.402452 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:49:46 crc kubenswrapper[4842]: W0202 06:49:46.415851 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81aa66cb_52e6_47c7_a265_f441c27469ab.slice/crio-9e72e571d2546b7b55a841837009db7f12ec675858678bd32edb3b3f5e9f3847 WatchSource:0}: Error finding container 9e72e571d2546b7b55a841837009db7f12ec675858678bd32edb3b3f5e9f3847: Status 404 returned error can't find the container with id 9e72e571d2546b7b55a841837009db7f12ec675858678bd32edb3b3f5e9f3847 Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.566105 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:49:46 crc kubenswrapper[4842]: W0202 06:49:46.574341 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12e4df66_5150_49ad_8fe1_a4c7cd09bb97.slice/crio-009f0767a9c6d25730471d2699cc1667960fae6b41aa164b180b1803f5c237c8 WatchSource:0}: Error finding container 009f0767a9c6d25730471d2699cc1667960fae6b41aa164b180b1803f5c237c8: Status 404 returned error can't find the container with id 009f0767a9c6d25730471d2699cc1667960fae6b41aa164b180b1803f5c237c8 Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.758515 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" event={"ID":"12e4df66-5150-49ad-8fe1-a4c7cd09bb97","Type":"ContainerStarted","Data":"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a"} Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.758823 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.758834 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" event={"ID":"12e4df66-5150-49ad-8fe1-a4c7cd09bb97","Type":"ContainerStarted","Data":"009f0767a9c6d25730471d2699cc1667960fae6b41aa164b180b1803f5c237c8"} Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.761691 4842 patch_prober.go:28] interesting pod/controller-manager-577b8789bf-xqfmj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.761727 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.762425 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" 
event={"ID":"81aa66cb-52e6-47c7-a265-f441c27469ab","Type":"ContainerStarted","Data":"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba"} Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.762473 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" event={"ID":"81aa66cb-52e6-47c7-a265-f441c27469ab","Type":"ContainerStarted","Data":"9e72e571d2546b7b55a841837009db7f12ec675858678bd32edb3b3f5e9f3847"} Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.762816 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.784689 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" podStartSLOduration=2.784670754 podStartE2EDuration="2.784670754s" podCreationTimestamp="2026-02-02 06:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:46.783027964 +0000 UTC m=+212.160295936" watchObservedRunningTime="2026-02-02 06:49:46.784670754 +0000 UTC m=+212.161938666" Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.806880 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" podStartSLOduration=2.806854364 podStartE2EDuration="2.806854364s" podCreationTimestamp="2026-02-02 06:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:49:46.80547096 +0000 UTC m=+212.182738912" watchObservedRunningTime="2026-02-02 06:49:46.806854364 +0000 UTC m=+212.184122306" Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.847306 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:49:46 crc kubenswrapper[4842]: I0202 06:49:46.861319 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-74vp9" Feb 02 06:49:47 crc kubenswrapper[4842]: I0202 06:49:47.457803 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b226528-cbee-4e1b-a63a-2e9cb152a9a5" path="/var/lib/kubelet/pods/0b226528-cbee-4e1b-a63a-2e9cb152a9a5/volumes" Feb 02 06:49:47 crc kubenswrapper[4842]: I0202 06:49:47.459389 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd1f864-6b9b-4113-b65e-446049b9af92" path="/var/lib/kubelet/pods/2cd1f864-6b9b-4113-b65e-446049b9af92/volumes" Feb 02 06:49:47 crc kubenswrapper[4842]: I0202 06:49:47.483053 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z5jt7" Feb 02 06:49:47 crc kubenswrapper[4842]: I0202 06:49:47.557077 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:47 crc kubenswrapper[4842]: I0202 06:49:47.784094 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.288999 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-l9qkz"] Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.289451 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l9qkz" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="registry-server" containerID="cri-o://2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926" gracePeriod=2 Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.771970 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.797907 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerID="2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926" exitCode=0 Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.797974 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l9qkz" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.797991 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerDied","Data":"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926"} Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.798074 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l9qkz" event={"ID":"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb","Type":"ContainerDied","Data":"5f20b78ac1d8de395289985ed057496cf0e32696d0cdab93b3ce9b9bfd17fab2"} Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.798136 4842 scope.go:117] "RemoveContainer" containerID="2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.824395 4842 scope.go:117] "RemoveContainer" containerID="26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.845437 4842 scope.go:117] "RemoveContainer" containerID="1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.870971 4842 scope.go:117] "RemoveContainer" containerID="2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926" Feb 02 06:49:49 crc kubenswrapper[4842]: E0202 06:49:49.871430 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926\": container with ID starting with 2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926 not found: ID does not exist" containerID="2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.871463 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926"} err="failed to get container status \"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926\": rpc error: code = NotFound desc = could not find container \"2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926\": container with ID starting with 2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926 not found: ID does not exist" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 
06:49:49.871493 4842 scope.go:117] "RemoveContainer" containerID="26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568" Feb 02 06:49:49 crc kubenswrapper[4842]: E0202 06:49:49.871807 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568\": container with ID starting with 26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568 not found: ID does not exist" containerID="26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.871832 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568"} err="failed to get container status \"26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568\": rpc error: code = NotFound desc = could not find container \"26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568\": container with ID starting with 26bc39f7ea1cc33a68a13fb29a60d43afd7bf35d627c4079450a37e3dff62568 not found: ID does not exist" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.871851 4842 scope.go:117] "RemoveContainer" containerID="1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5" Feb 02 06:49:49 crc kubenswrapper[4842]: E0202 06:49:49.872056 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5\": container with ID starting with 1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5 not found: ID does not exist" containerID="1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.872083 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5"} err="failed to get container status \"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5\": rpc error: code = NotFound desc = could not find container \"1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5\": container with ID starting with 1e25dc3d1edea490e1c8cd3b444d5b88a6502a90bad3cef321e8416ee23978b5 not found: ID does not exist" Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.884134 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrqbw\" (UniqueName: \"kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw\") pod \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.885832 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content\") pod \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.885886 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities\") pod \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\" (UID: \"c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb\") " Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 
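
[Editor's note: the three RemoveContainer/NotFound pairs above are benign: the containers were already gone, and the runtime signals that with gRPC code NotFound, which cleanup paths can treat as success since the desired state (container removed) already holds. A sketch of that idiom follows; it assumes the google.golang.org/grpc module, and removeContainer is a hypothetical stand-in for a CRI-style call, not the kubelet's actual client.]

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a hypothetical CRI-style call that fails with a gRPC
// NotFound status when the container ID no longer exists.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: ID does not exist", id)
}

// cleanup treats NotFound as success: the container is already gone, which is
// the outcome cleanup wanted, mirroring the DeleteContainer entries above.
func cleanup(id string) error {
	if err := removeContainer(id); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // already removed; nothing to do
		}
		return err // any other failure is a real error
	}
	return nil
}

func main() {
	fmt.Println(cleanup("2bf2c11f1ca39125eb285b3c434e4d99866c2230b07228184367c7c4ce810926")) // <nil>
}
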
Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.887066 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities" (OuterVolumeSpecName: "utilities") pod "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" (UID: "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.893640 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw" (OuterVolumeSpecName: "kube-api-access-mrqbw") pod "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" (UID: "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb"). InnerVolumeSpecName "kube-api-access-mrqbw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.966563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" (UID: "c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.989644 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrqbw\" (UniqueName: \"kubernetes.io/projected/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-kube-api-access-mrqbw\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.989690 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:49 crc kubenswrapper[4842]: I0202 06:49:49.989702 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:50 crc kubenswrapper[4842]: I0202 06:49:50.146096 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l9qkz"]
Feb 02 06:49:50 crc kubenswrapper[4842]: I0202 06:49:50.150013 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l9qkz"]
Feb 02 06:49:51 crc kubenswrapper[4842]: I0202 06:49:51.444449 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" path="/var/lib/kubelet/pods/c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb/volumes"
Feb 02 06:49:55 crc kubenswrapper[4842]: I0202 06:49:55.714140 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerName="oauth-openshift" containerID="cri-o://25634892eeeb42d0ef66d036ba3180352e61cb89dc73ca05e000cddfc7ed5d5f" gracePeriod=15
Feb 02 06:49:55 crc kubenswrapper[4842]: I0202 06:49:55.859392 4842 generic.go:334] "Generic (PLEG): container finished" podID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerID="25634892eeeb42d0ef66d036ba3180352e61cb89dc73ca05e000cddfc7ed5d5f" exitCode=0
Feb 02 06:49:55 crc kubenswrapper[4842]: I0202 06:49:55.859462 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" event={"ID":"bf91f3e9-19c2-4f18-b129-41aafd1a1264","Type":"ContainerDied","Data":"25634892eeeb42d0ef66d036ba3180352e61cb89dc73ca05e000cddfc7ed5d5f"}
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.288148 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474064 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474136 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474175 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474210 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474295 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.474335 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475087 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475199 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475304 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmndw\" (UniqueName: \"kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475423 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475490 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475541 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475595 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475649 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle\") pod \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\" (UID: \"bf91f3e9-19c2-4f18-b129-41aafd1a1264\") "
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475881 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.476290 4842 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.476412 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.475826 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.476510 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.476650 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.483761 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.483993 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.484460 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.484622 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw" (OuterVolumeSpecName: "kube-api-access-bmndw") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "kube-api-access-bmndw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.485187 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.485710 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.486027 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.486486 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.491065 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bf91f3e9-19c2-4f18-b129-41aafd1a1264" (UID: "bf91f3e9-19c2-4f18-b129-41aafd1a1264"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577067 4842 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf91f3e9-19c2-4f18-b129-41aafd1a1264-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577136 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577167 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577197 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577292 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577321 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577348 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577376 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577424 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577452 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577477 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmndw\" (UniqueName: \"kubernetes.io/projected/bf91f3e9-19c2-4f18-b129-41aafd1a1264-kube-api-access-bmndw\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577504 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.577531 4842 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bf91f3e9-19c2-4f18-b129-41aafd1a1264-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.874235 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv" event={"ID":"bf91f3e9-19c2-4f18-b129-41aafd1a1264","Type":"ContainerDied","Data":"9e442ed8624abf7c7c008be60f767ce4757519be014cdfd4e95fe98d8969b767"}
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.874290 4842 scope.go:117] "RemoveContainer" containerID="25634892eeeb42d0ef66d036ba3180352e61cb89dc73ca05e000cddfc7ed5d5f"
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.874411 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hj5sv"
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.917431 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"]
Feb 02 06:49:56 crc kubenswrapper[4842]: I0202 06:49:56.920565 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hj5sv"]
Feb 02 06:49:57 crc kubenswrapper[4842]: I0202 06:49:57.445472 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" path="/var/lib/kubelet/pods/bf91f3e9-19c2-4f18-b129-41aafd1a1264/volumes"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.704650 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-c494796b-xmpl7"]
Feb 02 06:49:59 crc kubenswrapper[4842]: E0202 06:49:59.705543 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="extract-utilities"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705565 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="extract-utilities"
Feb 02 06:49:59 crc kubenswrapper[4842]: E0202 06:49:59.705593 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="extract-content"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705607 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="extract-content"
Feb 02 06:49:59 crc kubenswrapper[4842]: E0202 06:49:59.705626 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="registry-server"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705638 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="registry-server"
Feb 02 06:49:59 crc kubenswrapper[4842]: E0202 06:49:59.705673 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerName="oauth-openshift"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705689 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerName="oauth-openshift"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705897 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b2c621-4f86-4e6b-a1ec-02fc1c8113cb" containerName="registry-server"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.705921 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf91f3e9-19c2-4f18-b129-41aafd1a1264" containerName="oauth-openshift"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.706572 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.713410 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.713907 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.714265 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.714502 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.715170 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.715600 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.716150 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.716797 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719583 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719662 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-session\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719760 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-service-ca\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719793 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-router-certs\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719834 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719873 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719908 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-error\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719955 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.719989 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.720023 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7"
Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.720070 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdxj6\" (UniqueName: \"kubernetes.io/projected/a2f32ab9-c38e-4e56-867d-7c1f14d54868-kube-api-access-bdxj6\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") "
pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.720108 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-policies\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.720142 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-dir\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.720174 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-login\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.722363 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.722921 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.723255 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.723360 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.734712 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.735066 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.740138 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c494796b-xmpl7"] Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.743435 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821640 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-login\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821731 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821770 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-session\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-service-ca\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821951 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-router-certs\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.821997 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822054 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822089 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-error\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822139 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822177 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822210 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822279 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdxj6\" (UniqueName: \"kubernetes.io/projected/a2f32ab9-c38e-4e56-867d-7c1f14d54868-kube-api-access-bdxj6\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822317 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-policies\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822352 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-dir\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.822499 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-dir\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.824279 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.824304 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.824568 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-audit-policies\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " 
pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.825466 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-service-ca\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.830604 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.830712 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.831031 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.831280 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-error\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.832121 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-router-certs\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.832689 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-system-session\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.833875 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 
06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.834131 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2f32ab9-c38e-4e56-867d-7c1f14d54868-v4-0-config-user-template-login\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:49:59 crc kubenswrapper[4842]: I0202 06:49:59.854825 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdxj6\" (UniqueName: \"kubernetes.io/projected/a2f32ab9-c38e-4e56-867d-7c1f14d54868-kube-api-access-bdxj6\") pod \"oauth-openshift-c494796b-xmpl7\" (UID: \"a2f32ab9-c38e-4e56-867d-7c1f14d54868\") " pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:50:00 crc kubenswrapper[4842]: I0202 06:50:00.041929 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:50:00 crc kubenswrapper[4842]: I0202 06:50:00.616022 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c494796b-xmpl7"] Feb 02 06:50:00 crc kubenswrapper[4842]: W0202 06:50:00.622209 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2f32ab9_c38e_4e56_867d_7c1f14d54868.slice/crio-a12ba5c9774a48bdfda0313b0e71b0f61667f7d287fad14d1bed7c668076e7ef WatchSource:0}: Error finding container a12ba5c9774a48bdfda0313b0e71b0f61667f7d287fad14d1bed7c668076e7ef: Status 404 returned error can't find the container with id a12ba5c9774a48bdfda0313b0e71b0f61667f7d287fad14d1bed7c668076e7ef Feb 02 06:50:00 crc kubenswrapper[4842]: I0202 06:50:00.914359 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" event={"ID":"a2f32ab9-c38e-4e56-867d-7c1f14d54868","Type":"ContainerStarted","Data":"a12ba5c9774a48bdfda0313b0e71b0f61667f7d287fad14d1bed7c668076e7ef"} Feb 02 06:50:01 crc kubenswrapper[4842]: I0202 06:50:01.921755 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" event={"ID":"a2f32ab9-c38e-4e56-867d-7c1f14d54868","Type":"ContainerStarted","Data":"0750c0dde31751ccbcbdb957d880d44f1e29d4b7a9954705a364d7cb82e7dcbb"} Feb 02 06:50:01 crc kubenswrapper[4842]: I0202 06:50:01.922336 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:50:01 crc kubenswrapper[4842]: I0202 06:50:01.930008 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" Feb 02 06:50:01 crc kubenswrapper[4842]: I0202 06:50:01.943699 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-c494796b-xmpl7" podStartSLOduration=31.9436737 podStartE2EDuration="31.9436737s" podCreationTimestamp="2026-02-02 06:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:50:01.941894217 +0000 UTC m=+227.319162199" watchObservedRunningTime="2026-02-02 06:50:01.9436737 +0000 UTC m=+227.320941652" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.143799 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.144315 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerName="controller-manager" containerID="cri-o://768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a" gracePeriod=30 Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.241274 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.241535 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" podUID="81aa66cb-52e6-47c7-a265-f441c27469ab" containerName="route-controller-manager" containerID="cri-o://c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba" gracePeriod=30 Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.832331 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.860067 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.897260 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config\") pod \"81aa66cb-52e6-47c7-a265-f441c27469ab\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.898583 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config" (OuterVolumeSpecName: "config") pod "81aa66cb-52e6-47c7-a265-f441c27469ab" (UID: "81aa66cb-52e6-47c7-a265-f441c27469ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.937423 4842 generic.go:334] "Generic (PLEG): container finished" podID="81aa66cb-52e6-47c7-a265-f441c27469ab" containerID="c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba" exitCode=0 Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.937474 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" event={"ID":"81aa66cb-52e6-47c7-a265-f441c27469ab","Type":"ContainerDied","Data":"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba"} Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.937496 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" event={"ID":"81aa66cb-52e6-47c7-a265-f441c27469ab","Type":"ContainerDied","Data":"9e72e571d2546b7b55a841837009db7f12ec675858678bd32edb3b3f5e9f3847"} Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.937512 4842 scope.go:117] "RemoveContainer" containerID="c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.937605 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.942065 4842 generic.go:334] "Generic (PLEG): container finished" podID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerID="768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a" exitCode=0 Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.942109 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" event={"ID":"12e4df66-5150-49ad-8fe1-a4c7cd09bb97","Type":"ContainerDied","Data":"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a"} Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.942139 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" event={"ID":"12e4df66-5150-49ad-8fe1-a4c7cd09bb97","Type":"ContainerDied","Data":"009f0767a9c6d25730471d2699cc1667960fae6b41aa164b180b1803f5c237c8"} Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.942192 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-577b8789bf-xqfmj" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.953941 4842 scope.go:117] "RemoveContainer" containerID="c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba" Feb 02 06:50:04 crc kubenswrapper[4842]: E0202 06:50:04.954499 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba\": container with ID starting with c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba not found: ID does not exist" containerID="c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.954537 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba"} err="failed to get container status \"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba\": rpc error: code = NotFound desc = could not find container \"c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba\": container with ID starting with c16710dc51da216dbe3e32e2e61d1af41762994fc2090d1139fb902be028acba not found: ID does not exist" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.954558 4842 scope.go:117] "RemoveContainer" containerID="768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.969278 4842 scope.go:117] "RemoveContainer" containerID="768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a" Feb 02 06:50:04 crc kubenswrapper[4842]: E0202 06:50:04.969689 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a\": container with ID starting with 768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a not found: ID does not exist" containerID="768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.969709 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a"} 
err="failed to get container status \"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a\": rpc error: code = NotFound desc = could not find container \"768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a\": container with ID starting with 768631107ab27a46c91c5b672c3d2cb93e3ebaca049c2f51e26a2fbebfd55d2a not found: ID does not exist" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998243 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca\") pod \"81aa66cb-52e6-47c7-a265-f441c27469ab\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998370 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjb8m\" (UniqueName: \"kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m\") pod \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998397 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca\") pod \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998417 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert\") pod \"81aa66cb-52e6-47c7-a265-f441c27469ab\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998438 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config\") pod \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998469 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b58hc\" (UniqueName: \"kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc\") pod \"81aa66cb-52e6-47c7-a265-f441c27469ab\" (UID: \"81aa66cb-52e6-47c7-a265-f441c27469ab\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998510 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert\") pod \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998529 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles\") pod \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\" (UID: \"12e4df66-5150-49ad-8fe1-a4c7cd09bb97\") " Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.998697 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.999347 4842 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "12e4df66-5150-49ad-8fe1-a4c7cd09bb97" (UID: "12e4df66-5150-49ad-8fe1-a4c7cd09bb97"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:50:04 crc kubenswrapper[4842]: I0202 06:50:04.999958 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca" (OuterVolumeSpecName: "client-ca") pod "81aa66cb-52e6-47c7-a265-f441c27469ab" (UID: "81aa66cb-52e6-47c7-a265-f441c27469ab"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.000396 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca" (OuterVolumeSpecName: "client-ca") pod "12e4df66-5150-49ad-8fe1-a4c7cd09bb97" (UID: "12e4df66-5150-49ad-8fe1-a4c7cd09bb97"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.000764 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config" (OuterVolumeSpecName: "config") pod "12e4df66-5150-49ad-8fe1-a4c7cd09bb97" (UID: "12e4df66-5150-49ad-8fe1-a4c7cd09bb97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.004170 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc" (OuterVolumeSpecName: "kube-api-access-b58hc") pod "81aa66cb-52e6-47c7-a265-f441c27469ab" (UID: "81aa66cb-52e6-47c7-a265-f441c27469ab"). InnerVolumeSpecName "kube-api-access-b58hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.004588 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m" (OuterVolumeSpecName: "kube-api-access-xjb8m") pod "12e4df66-5150-49ad-8fe1-a4c7cd09bb97" (UID: "12e4df66-5150-49ad-8fe1-a4c7cd09bb97"). InnerVolumeSpecName "kube-api-access-xjb8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.005508 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81aa66cb-52e6-47c7-a265-f441c27469ab" (UID: "81aa66cb-52e6-47c7-a265-f441c27469ab"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.005771 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "12e4df66-5150-49ad-8fe1-a4c7cd09bb97" (UID: "12e4df66-5150-49ad-8fe1-a4c7cd09bb97"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099834 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjb8m\" (UniqueName: \"kubernetes.io/projected/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-kube-api-access-xjb8m\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099885 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099898 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81aa66cb-52e6-47c7-a265-f441c27469ab-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099908 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b58hc\" (UniqueName: \"kubernetes.io/projected/81aa66cb-52e6-47c7-a265-f441c27469ab-kube-api-access-b58hc\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099917 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099926 4842 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099936 4842 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12e4df66-5150-49ad-8fe1-a4c7cd09bb97-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.099961 4842 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81aa66cb-52e6-47c7-a265-f441c27469ab-client-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.301025 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.304674 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-577b8789bf-xqfmj"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.311173 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.316691 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f865c6b84-bslhd"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.440860 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" path="/var/lib/kubelet/pods/12e4df66-5150-49ad-8fe1-a4c7cd09bb97/volumes" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.441522 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81aa66cb-52e6-47c7-a265-f441c27469ab" path="/var/lib/kubelet/pods/81aa66cb-52e6-47c7-a265-f441c27469ab/volumes" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714087 4842 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"] Feb 02 06:50:05 crc kubenswrapper[4842]: E0202 06:50:05.714341 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81aa66cb-52e6-47c7-a265-f441c27469ab" containerName="route-controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714356 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="81aa66cb-52e6-47c7-a265-f441c27469ab" containerName="route-controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: E0202 06:50:05.714372 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerName="controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714381 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerName="controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714495 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e4df66-5150-49ad-8fe1-a4c7cd09bb97" containerName="controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714508 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="81aa66cb-52e6-47c7-a265-f441c27469ab" containerName="route-controller-manager" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.714874 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.718275 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.718534 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.719410 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.719569 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.720052 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.720535 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.723113 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.723950 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.725998 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728060 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728078 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728452 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728554 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728625 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728809 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"] Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728843 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.728906 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.912834 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d04fc416-09dd-4101-b594-09adf0fca345-serving-cert\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913144 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-proxy-ca-bundles\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913279 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8chlp\" (UniqueName: \"kubernetes.io/projected/d04fc416-09dd-4101-b594-09adf0fca345-kube-api-access-8chlp\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913488 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadf3560-132b-4d19-b532-2cfb01019ca2-serving-cert\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " 
pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913526 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-config\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913562 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-config\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913648 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8h5\" (UniqueName: \"kubernetes.io/projected/dadf3560-132b-4d19-b532-2cfb01019ca2-kube-api-access-zm8h5\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913682 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-client-ca\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:05 crc kubenswrapper[4842]: I0202 06:50:05.913714 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-client-ca\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015034 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadf3560-132b-4d19-b532-2cfb01019ca2-serving-cert\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015069 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-config\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015092 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-config\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" Feb 
02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015126 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8h5\" (UniqueName: \"kubernetes.io/projected/dadf3560-132b-4d19-b532-2cfb01019ca2-kube-api-access-zm8h5\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015140 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-client-ca\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015161 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-client-ca\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015181 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d04fc416-09dd-4101-b594-09adf0fca345-serving-cert\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015198 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-proxy-ca-bundles\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.015261 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8chlp\" (UniqueName: \"kubernetes.io/projected/d04fc416-09dd-4101-b594-09adf0fca345-kube-api-access-8chlp\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.016517 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-proxy-ca-bundles\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.017364 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-client-ca\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.017511 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-client-ca\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.017649 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d04fc416-09dd-4101-b594-09adf0fca345-config\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.020374 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dadf3560-132b-4d19-b532-2cfb01019ca2-serving-cert\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.020537 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadf3560-132b-4d19-b532-2cfb01019ca2-config\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.021337 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d04fc416-09dd-4101-b594-09adf0fca345-serving-cert\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.037663 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8chlp\" (UniqueName: \"kubernetes.io/projected/d04fc416-09dd-4101-b594-09adf0fca345-kube-api-access-8chlp\") pod \"controller-manager-bdd884f8b-p6pzq\" (UID: \"d04fc416-09dd-4101-b594-09adf0fca345\") " pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.040533 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8h5\" (UniqueName: \"kubernetes.io/projected/dadf3560-132b-4d19-b532-2cfb01019ca2-kube-api-access-zm8h5\") pod \"route-controller-manager-86d7677bf-bz6nq\" (UID: \"dadf3560-132b-4d19-b532-2cfb01019ca2\") " pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.091719 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.096963 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.548346 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"]
Feb 02 06:50:06 crc kubenswrapper[4842]: W0202 06:50:06.550696 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddadf3560_132b_4d19_b532_2cfb01019ca2.slice/crio-badb7447f73fab560bc0a46616f5c5a0d6a83afd50381d73d9078aac5d0d98a4 WatchSource:0}: Error finding container badb7447f73fab560bc0a46616f5c5a0d6a83afd50381d73d9078aac5d0d98a4: Status 404 returned error can't find the container with id badb7447f73fab560bc0a46616f5c5a0d6a83afd50381d73d9078aac5d0d98a4
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.639657 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"]
Feb 02 06:50:06 crc kubenswrapper[4842]: W0202 06:50:06.644507 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd04fc416_09dd_4101_b594_09adf0fca345.slice/crio-f73107010025ad47264ffcaa5886d0f57784a519a5775a4c96b98c90644f7b78 WatchSource:0}: Error finding container f73107010025ad47264ffcaa5886d0f57784a519a5775a4c96b98c90644f7b78: Status 404 returned error can't find the container with id f73107010025ad47264ffcaa5886d0f57784a519a5775a4c96b98c90644f7b78
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.955923 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" event={"ID":"dadf3560-132b-4d19-b532-2cfb01019ca2","Type":"ContainerStarted","Data":"f39efadb06d27e43a6f28be0a797887f10e5c3790fa6867dee0a09ae275ad961"}
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.955975 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" event={"ID":"dadf3560-132b-4d19-b532-2cfb01019ca2","Type":"ContainerStarted","Data":"badb7447f73fab560bc0a46616f5c5a0d6a83afd50381d73d9078aac5d0d98a4"}
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.957052 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.959150 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" event={"ID":"d04fc416-09dd-4101-b594-09adf0fca345","Type":"ContainerStarted","Data":"48928bfbbf05285bcb0191927a87bcda75eacbe5fbd97b3c0f47b7d6a51f5079"}
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.959238 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" event={"ID":"d04fc416-09dd-4101-b594-09adf0fca345","Type":"ContainerStarted","Data":"f73107010025ad47264ffcaa5886d0f57784a519a5775a4c96b98c90644f7b78"}
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.959471 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.965628 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.972491 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq" podStartSLOduration=2.972480846 podStartE2EDuration="2.972480846s" podCreationTimestamp="2026-02-02 06:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:50:06.96978041 +0000 UTC m=+232.347048322" watchObservedRunningTime="2026-02-02 06:50:06.972480846 +0000 UTC m=+232.349748758"
Feb 02 06:50:06 crc kubenswrapper[4842]: I0202 06:50:06.989250 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bdd884f8b-p6pzq" podStartSLOduration=2.989231453 podStartE2EDuration="2.989231453s" podCreationTimestamp="2026-02-02 06:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:50:06.988722661 +0000 UTC m=+232.365990583" watchObservedRunningTime="2026-02-02 06:50:06.989231453 +0000 UTC m=+232.366499375"
Feb 02 06:50:07 crc kubenswrapper[4842]: I0202 06:50:07.423405 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-86d7677bf-bz6nq"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.771958 4842 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.774728 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe" gracePeriod=15
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.774817 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518" gracePeriod=15
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.774793 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7" gracePeriod=15
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.774845 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee" gracePeriod=15
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.774942 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5" gracePeriod=15
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.775837 4842 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776184 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776207 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776250 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776264 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776287 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776302 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776327 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776338 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776357 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776369 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776385 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776397 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.776412 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776424 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776596 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776618 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776636 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776653 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776675 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.776992 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.781320 4842 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.783046 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.788807 4842 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Feb 02 06:50:16 crc kubenswrapper[4842]: E0202 06:50:16.851680 4842 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902051 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902426 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902483 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902529 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902562 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902601 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902635 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:16 crc kubenswrapper[4842]: I0202 06:50:16.902663 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.003993 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004058 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004104 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004153 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004172 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004243 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004257 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004289 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004332 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004296 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004353 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004306 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004417 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004463 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004500 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.004463 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.021366 4842 generic.go:334] "Generic (PLEG): container finished" podID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" containerID="240ef4d9719e0e125f80aaba75a288ed11f634bda46b01e82f75011b4bb97529" exitCode=0
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.021479 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea82b6bc-5c1e-496e-8501-45fdb7220cbb","Type":"ContainerDied","Data":"240ef4d9719e0e125f80aaba75a288ed11f634bda46b01e82f75011b4bb97529"}
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.024494 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.025514 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.027105 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.028022 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7" exitCode=0
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.028055 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518" exitCode=0
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.028072 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5" exitCode=0
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.028086 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee" exitCode=2
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.028134 4842 scope.go:117] "RemoveContainer" containerID="628bf15b9bc2054996ba1bf571ea68da76c268a27d5f83421750889d3c6c4169"
Feb 02 06:50:17 crc kubenswrapper[4842]: I0202 06:50:17.152937 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:17 crc kubenswrapper[4842]: W0202 06:50:17.172284 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-5e505f943ae8934267cdf62782f888bd3e63f4f4294207bc4cff73ed3325628c WatchSource:0}: Error finding container 5e505f943ae8934267cdf62782f888bd3e63f4f4294207bc4cff73ed3325628c: Status 404 returned error can't find the container with id 5e505f943ae8934267cdf62782f888bd3e63f4f4294207bc4cff73ed3325628c
Feb 02 06:50:17 crc kubenswrapper[4842]: E0202 06:50:17.179398 4842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18905b47b9f6be2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,LastTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.037983 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"52658e1427cd8c9c3ef6d07e7765f9b82d90bd1dc21508676eb83936020b6106"}
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.038462 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5e505f943ae8934267cdf62782f888bd3e63f4f4294207bc4cff73ed3325628c"}
Feb 02 06:50:18 crc kubenswrapper[4842]: E0202 06:50:18.039411 4842 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.039544 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.042613 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.542576 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.543777 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.626879 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir\") pod \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") "
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.626933 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access\") pod \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") "
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.626967 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ea82b6bc-5c1e-496e-8501-45fdb7220cbb" (UID: "ea82b6bc-5c1e-496e-8501-45fdb7220cbb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.627043 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock\") pod \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\" (UID: \"ea82b6bc-5c1e-496e-8501-45fdb7220cbb\") "
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.627143 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock" (OuterVolumeSpecName: "var-lock") pod "ea82b6bc-5c1e-496e-8501-45fdb7220cbb" (UID: "ea82b6bc-5c1e-496e-8501-45fdb7220cbb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.627307 4842 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-var-lock\") on node \"crc\" DevicePath \"\""
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.627322 4842 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.634486 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ea82b6bc-5c1e-496e-8501-45fdb7220cbb" (UID: "ea82b6bc-5c1e-496e-8501-45fdb7220cbb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:50:18 crc kubenswrapper[4842]: I0202 06:50:18.729128 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea82b6bc-5c1e-496e-8501-45fdb7220cbb-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.051476 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ea82b6bc-5c1e-496e-8501-45fdb7220cbb","Type":"ContainerDied","Data":"0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200"}
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.051745 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0552a9b96b9d22768298700a35eacdb617d371443cdcdb1aba68d660647a3200"
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.051595 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.157819 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.163257 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.164444 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.165103 4842 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.165690 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.251722 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.251879 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.251907 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.251939 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.252005 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.252088 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.252296 4842 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.252318 4842 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.252335 4842 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 02 06:50:19 crc kubenswrapper[4842]: I0202 06:50:19.446782 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Feb 02 06:50:19 crc kubenswrapper[4842]: E0202 06:50:19.767759 4842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18905b47b9f6be2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,LastTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.061181 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.062037 4842 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe" exitCode=0
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.062179 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.062238 4842 scope.go:117] "RemoveContainer" containerID="a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.062870 4842 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.063267 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.067063 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.067378 4842 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.088506 4842 scope.go:117] "RemoveContainer" containerID="d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.110953 4842 scope.go:117] "RemoveContainer" containerID="9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.127034 4842 scope.go:117] "RemoveContainer" containerID="231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.146053 4842 scope.go:117] "RemoveContainer" containerID="7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.162437 4842 scope.go:117] "RemoveContainer" containerID="3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.195149 4842 scope.go:117] "RemoveContainer" containerID="a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7"
Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.195714 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\": container with ID starting with a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7 not found: ID does not exist" containerID="a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.195786 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7"} err="failed to get container status \"a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\": rpc error: code = NotFound desc = could not find container \"a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7\": container with ID starting with a589273bf292608d88f8748a34b82bfdc81ca30cd2d187292be98bc3107509c7 not found: ID does not exist"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.195826 4842 scope.go:117] "RemoveContainer" containerID="d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518"
Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.196125 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\": container with ID starting with d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518 not found: ID does not exist" containerID="d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196162 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518"} err="failed to get container status \"d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\": rpc error: code = NotFound desc = could not find container \"d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518\": container with ID starting with d5c0833e30d3ee3b87d79e631011ce09b33799c37d79a246f7aec4856c885518 not found: ID does not exist"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196180 4842 scope.go:117] "RemoveContainer" containerID="9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5"
Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.196485 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\": container with ID starting with 9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5 not found: ID does not exist" containerID="9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196528 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5"} err="failed to get container status \"9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\": rpc error: code = NotFound desc = could not find container \"9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5\": container with ID starting with 9884f59cfeef4bed5b8195b1d9d4932ab89641efae7d954ea87d2031a7ff88f5 not found: ID does not exist"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196564 4842 scope.go:117] "RemoveContainer" containerID="231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee"
Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.196841 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\": container with ID starting with 231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee not found: ID does not exist" containerID="231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196876 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee"} err="failed to get container status \"231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\": rpc error: code = NotFound desc = could not find container \"231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee\": container with ID starting with 231ccdd094721052a86c2e4d3493939a817467a77598c188dc7b66c4bec2e0ee not found: ID does not exist"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.196895 4842 scope.go:117] "RemoveContainer" containerID="7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe"
Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.197076 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\": container with ID starting with 7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe not found: ID does not exist" containerID="7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.197102 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe"} err="failed to get container status \"7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\": rpc error: code = NotFound desc = could not find container \"7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe\": container with ID starting with 7d6d52911d30235c7d065de7e44d7842b5e4bf387e513df2c6ab9d2865662cbe not found: ID does not exist"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.197118 4842 scope.go:117] "RemoveContainer" containerID="3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45"
Feb 02 06:50:20 crc kubenswrapper[4842]: E0202 06:50:20.197567 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\": container with ID starting with 3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45 not found: ID does not exist" containerID="3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45"
Feb 02 06:50:20 crc kubenswrapper[4842]: I0202 06:50:20.197702 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45"} err="failed to get container status \"3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\": rpc error: code = NotFound desc = could not find container \"3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45\": container with ID starting with 3a41e8e2fd5ce46bd2cc87eccb9d321661c276f4e397c8df3368a1d0cc0eab45 not found: ID does not exist"
Feb 02 06:50:25 crc kubenswrapper[4842]: I0202 06:50:25.446829 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.189993 4842 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.190496 4842 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.190940 4842 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.191426 4842 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.192065 4842 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:26 crc kubenswrapper[4842]: I0202 06:50:26.192116 4842 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.192625 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="200ms"
Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.393373 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="400ms"
Feb 02 06:50:26 crc kubenswrapper[4842]: E0202 06:50:26.794619 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="800ms"
Feb 02 06:50:27 crc kubenswrapper[4842]: E0202 06:50:27.596126 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="1.6s"
Feb 02 06:50:29 crc kubenswrapper[4842]: E0202 06:50:29.198404 4842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.169:6443: connect: connection refused" interval="3.2s"
Feb 02 06:50:29 crc kubenswrapper[4842]: E0202 06:50:29.770272 4842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.169:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18905b47b9f6be2f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,LastTimestamp:2026-02-02 06:50:17.177366063 +0000 UTC m=+242.554634005,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.135860 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.136317 4842 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf" exitCode=1
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.136447 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf"}
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.137421 4842 scope.go:117] "RemoveContainer" containerID="2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf"
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.137755 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.138737 4842 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.433297 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.434809 4842 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.435405 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.448948 4842 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd"
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.448981 4842 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd"
Feb 02 06:50:30 crc kubenswrapper[4842]: E0202 06:50:30.449404 4842 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:30 crc kubenswrapper[4842]: I0202 06:50:30.449986 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:30 crc kubenswrapper[4842]: W0202 06:50:30.490988 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-1e856f5f2a45e09ccbd846b496bf0bb33f663882b3d6bb00a5ebe1f412d8ee63 WatchSource:0}: Error finding container 1e856f5f2a45e09ccbd846b496bf0bb33f663882b3d6bb00a5ebe1f412d8ee63: Status 404 returned error can't find the container with id 1e856f5f2a45e09ccbd846b496bf0bb33f663882b3d6bb00a5ebe1f412d8ee63
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.148006 4842 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="e239dbed7987b10b04ec8caef7e2da3b79cf6b6d24948f7583a18830832c0b2b" exitCode=0
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.148152 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"e239dbed7987b10b04ec8caef7e2da3b79cf6b6d24948f7583a18830832c0b2b"}
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.148418 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1e856f5f2a45e09ccbd846b496bf0bb33f663882b3d6bb00a5ebe1f412d8ee63"}
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.148986 4842 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd"
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.149040 4842 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd"
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.149533 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:31 crc kubenswrapper[4842]: E0202 06:50:31.149849 4842 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.169:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.150142 4842 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.153814 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.153904 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f031981f643f5d87b4def10d3e2db442ecf61d86a5b06ab2a2c7e39a48be9b60"}
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.155039 4842 status_manager.go:851] "Failed to get status for pod" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.155754 4842 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.169:6443: connect: connection refused"
Feb 02 06:50:31 crc kubenswrapper[4842]: I0202 06:50:31.899043 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:50:32 crc kubenswrapper[4842]: I0202 06:50:32.164366 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e30ccc92010a71c941ffa3971080c2714655e55cab7a71a1f0418834a654b59d"}
Feb 02 06:50:32 crc kubenswrapper[4842]: I0202 06:50:32.164408 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3a1cad6774cf3511d926447185654d22ba47ce37238d6fec0196a476ad1a4cb2"}
Feb 02 06:50:32 crc kubenswrapper[4842]: I0202 06:50:32.164419 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2674c3e849babe1ce160765c5bf41b34ed73314d3d4518a4221eb22d72e68d4b"}
Feb 02 06:50:33 crc kubenswrapper[4842]: I0202 06:50:33.171791 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3648926f4df743e137834a490d45f1c3ce203d74c3fec461b83175cf38ade3ad"}
Feb 02 06:50:33 crc kubenswrapper[4842]: I0202 06:50:33.172043 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"51668a4b98d59c5beba508371586ec94d9bb4c2af695ce0219bf0d93e8844af4"}
Feb 02 06:50:33 crc kubenswrapper[4842]: I0202 06:50:33.172274 4842 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd"
Feb 02 06:50:33 crc kubenswrapper[4842]: I0202 06:50:33.172286 4842 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd"
Feb 02 06:50:33 crc kubenswrapper[4842]: I0202 06:50:33.172281 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.452127 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.452419 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.463986 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.604522 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.604957 4842 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Feb 02 06:50:35 crc kubenswrapper[4842]: I0202 06:50:35.605130 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Feb 02 06:50:38 crc kubenswrapper[4842]: I0202 06:50:38.187987 4842 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 02 06:50:38 crc kubenswrapper[4842]: I0202 06:50:38.266977 4842 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1fc8a1ea-35cd-4572-a73e-62404385c296"
Feb 02 06:50:39 crc kubenswrapper[4842]: I0202 06:50:39.206918 4842
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd" Feb 02 06:50:39 crc kubenswrapper[4842]: I0202 06:50:39.207390 4842 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a52fecd8-6250-4bb6-bd2d-5f882a228ccd" Feb 02 06:50:39 crc kubenswrapper[4842]: I0202 06:50:39.210047 4842 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1fc8a1ea-35cd-4572-a73e-62404385c296" Feb 02 06:50:45 crc kubenswrapper[4842]: I0202 06:50:45.605162 4842 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 06:50:45 crc kubenswrapper[4842]: I0202 06:50:45.606863 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 02 06:50:48 crc kubenswrapper[4842]: I0202 06:50:48.693776 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 02 06:50:48 crc kubenswrapper[4842]: I0202 06:50:48.702574 4842 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.045766 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.269822 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.645520 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.714748 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.801863 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 02 06:50:49 crc kubenswrapper[4842]: I0202 06:50:49.848548 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.214902 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.344024 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.396377 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.413566 4842 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.815564 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.911182 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 02 06:50:50 crc kubenswrapper[4842]: I0202 06:50:50.988737 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.114982 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.347567 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.479155 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.826595 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.920078 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.920518 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.920278 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.922984 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.923204 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.923374 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.923610 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.924565 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.929111 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.932505 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 02 06:50:51 crc kubenswrapper[4842]: I0202 06:50:51.951507 4842 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.033278 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.070708 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.121030 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.135076 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.159987 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.203457 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.336024 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.442556 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.468205 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.569601 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.878601 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 02 06:50:52 crc kubenswrapper[4842]: I0202 06:50:52.951148 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.000507 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.025647 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.034985 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.035170 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.060335 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.078056 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.126992 4842 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.214656 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.266031 4842 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.305484 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.365325 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.482660 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.521509 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.535523 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.577466 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.586530 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.671456 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.747702 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.900515 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.919534 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.959733 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 02 06:50:53 crc kubenswrapper[4842]: I0202 06:50:53.993886 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.018106 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.037302 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.097279 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.194836 4842 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.262158 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.312047 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.412315 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.412680 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.419301 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.433513 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.552946 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.568269 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.579550 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.628765 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.664033 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.697489 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.716564 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.733817 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.742952 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.748121 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.750038 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.763643 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.777889 4842 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.809913 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.818247 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 02 06:50:54 crc kubenswrapper[4842]: I0202 06:50:54.974758 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.018613 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.022324 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.039645 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.065752 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.139121 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.148335 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.252684 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.285747 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.292844 4842 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.304147 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.458901 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.540792 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.604998 4842 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.605092 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: 
connect: connection refused" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.605187 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.606210 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"f031981f643f5d87b4def10d3e2db442ecf61d86a5b06ab2a2c7e39a48be9b60"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.606505 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://f031981f643f5d87b4def10d3e2db442ecf61d86a5b06ab2a2c7e39a48be9b60" gracePeriod=30 Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.607664 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.783209 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.838086 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.843405 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.894823 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.925028 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.946190 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.963899 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 02 06:50:55 crc kubenswrapper[4842]: I0202 06:50:55.989137 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.055960 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.071262 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.078363 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.124974 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 02 
06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.259755 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.279989 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.355940 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.409929 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.422265 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.439818 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.535086 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.580828 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.583332 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.605272 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.613673 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.662658 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.700192 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.730288 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.731999 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.759370 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.773271 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.816888 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.890023 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.899682 4842 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.938810 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.956930 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 02 06:50:56 crc kubenswrapper[4842]: I0202 06:50:56.960662 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.058152 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.142299 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.225262 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.373400 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.403525 4842 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.443071 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.481309 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.499867 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.506965 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.609865 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.649510 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.662201 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.728754 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.744702 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.776849 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.787915 4842 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.820060 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.836656 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.839862 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.874249 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 02 06:50:57 crc kubenswrapper[4842]: I0202 06:50:57.998156 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.032811 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.154822 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.172077 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.266749 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.334719 4842 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.341607 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.341673 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.348972 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.349847 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.370821 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.370799425 podStartE2EDuration="20.370799425s" podCreationTimestamp="2026-02-02 06:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:50:58.367002048 +0000 UTC m=+283.744269990" watchObservedRunningTime="2026-02-02 06:50:58.370799425 +0000 UTC m=+283.748067377" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.376863 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.377174 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.470013 
4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.509503 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.787778 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.810950 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.818056 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.879044 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.940067 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.947951 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 02 06:50:58 crc kubenswrapper[4842]: I0202 06:50:58.974750 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.006052 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.070888 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.152471 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.498332 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.524649 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.533650 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.609767 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.750022 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.802337 4842 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.853803 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 02 06:50:59 crc 
kubenswrapper[4842]: I0202 06:50:59.872751 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.885025 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 02 06:50:59 crc kubenswrapper[4842]: I0202 06:50:59.890901 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.052732 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.085829 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.096541 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.115445 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.320197 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.354673 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.475364 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.512814 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.522751 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.539161 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.712914 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.817875 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.887039 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.888714 4842 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.889183 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" 
containerID="cri-o://52658e1427cd8c9c3ef6d07e7765f9b82d90bd1dc21508676eb83936020b6106" gracePeriod=5 Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.904145 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.927341 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 02 06:51:00 crc kubenswrapper[4842]: I0202 06:51:00.998388 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.028730 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.055893 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.108953 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.149165 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.407802 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.441447 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.521550 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.527097 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.535627 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.564743 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.654564 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.856683 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 02 06:51:01 crc kubenswrapper[4842]: I0202 06:51:01.858921 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.020665 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.091441 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 02 06:51:02 crc 
kubenswrapper[4842]: I0202 06:51:02.128427 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.157699 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.180207 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.220496 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.328272 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.333103 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.373614 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.390607 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.396388 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.404822 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.417820 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.454450 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.475984 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.573110 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.591638 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.740998 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.800307 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.849057 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 02 06:51:02 crc kubenswrapper[4842]: I0202 06:51:02.887957 4842 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.002173 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.098677 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.170964 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.257037 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.273374 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.493670 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.772677 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 02 06:51:03 crc kubenswrapper[4842]: I0202 06:51:03.821839 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 02 06:51:04 crc kubenswrapper[4842]: I0202 06:51:04.135930 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 02 06:51:04 crc kubenswrapper[4842]: I0202 06:51:04.925776 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 02 06:51:05 crc kubenswrapper[4842]: I0202 06:51:05.514749 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 02 06:51:05 crc kubenswrapper[4842]: I0202 06:51:05.834137 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.013721 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.013789 4842 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="52658e1427cd8c9c3ef6d07e7765f9b82d90bd1dc21508676eb83936020b6106" exitCode=137 Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.498732 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.498849 4842 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.648919 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.648977 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649054 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649113 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649129 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649147 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649195 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649206 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.649325 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.650454 4842 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.650510 4842 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.650535 4842 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.650556 4842 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.661149 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 06:51:06 crc kubenswrapper[4842]: I0202 06:51:06.751878 4842 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.023004 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.023435 4842 scope.go:117] "RemoveContainer" containerID="52658e1427cd8c9c3ef6d07e7765f9b82d90bd1dc21508676eb83936020b6106"
Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.023521 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
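The reconciler_common.go / operation_generator.go pairs above show the volume manager's two-phase teardown: the reconciler marks an unmount as started, the operation executor runs TearDown, and only after that succeeds is the volume reported detached. A self-contained sketch of that pattern; the types and helper names here are invented for illustration, not the kubelet's actual ones.

package main

import "fmt"

// volume is a hypothetical stand-in for the kubelet's mounted-volume record.
type volume struct {
	name string
	uid  string
}

// tearDown stands in for UnmountVolume.TearDown (e.g. unmounting a host path).
func tearDown(v volume) error {
	// Real plugins unmount bind mounts, clean up directories, etc.
	return nil
}

func reconcile(mounted []volume, desired map[string]bool) {
	for _, v := range mounted {
		if desired[v.name] {
			continue // still wanted by some pod
		}
		// Phase 1: record that the unmount operation has started.
		fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.uid)
		if err := tearDown(v); err != nil {
			fmt.Printf("TearDown failed for %q: %v (will retry)\n", v.name, err)
			continue
		}
		// Phase 2: only after TearDown succeeds is the volume marked detached.
		fmt.Printf("Volume detached for volume %q on node \"crc\"\n", v.name)
	}
}

func main() {
	pod := "f85e55b1a89d02b0cb034b1ea31ed45a"
	mounted := []volume{{"resource-dir", pod}, {"var-log", pod}, {"var-lock", pod}}
	reconcile(mounted, map[string]bool{}) // pod deleted: nothing is desired anymore
}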
Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.241412 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.443399 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 02 06:51:07 crc kubenswrapper[4842]: I0202 06:51:07.508412 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.277263 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74vp9"]
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.278615 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-74vp9" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="registry-server" containerID="cri-o://6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca" gracePeriod=30
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.288896 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z5jt7"]
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.289558 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z5jt7" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="registry-server" containerID="cri-o://85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946" gracePeriod=30
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.305680 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"]
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.305989 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" containerID="cri-o://817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a" gracePeriod=30
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.328739 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"]
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.329194 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m2j5m" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="registry-server" containerID="cri-o://c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025" gracePeriod=30
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.336942 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"]
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.337316 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5l5m7" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="registry-server" containerID="cri-o://d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a" gracePeriod=30
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.807552 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5jt7"
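Each "Killing container with a grace period" entry corresponds to a CRI StopContainer call whose timeout is the pod's termination grace period (30s here): the runtime delivers SIGTERM and escalates to SIGKILL once the timeout expires. A minimal sketch against the CRI v1 gRPC API; the socket path is CRI-O's conventional default and an assumption here, and error handling is trimmed.

package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O's default socket on an OpenShift node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 35*time.Second)
	defer cancel()

	// Timeout is the grace period: SIGTERM first, SIGKILL after 30s.
	_, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca",
		Timeout:     30,
	})
	if err != nil {
		panic(err)
	}
}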
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.899471 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5l5m7"
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.906236 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2j5m"
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.909584 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn"
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.932900 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.939595 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities\") pod \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") "
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.939632 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q662f\" (UniqueName: \"kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f\") pod \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") "
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.939688 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content\") pod \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\" (UID: \"69e94ec9-2a3b-4f85-a2b7-9e2f07359890\") "
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.940620 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities" (OuterVolumeSpecName: "utilities") pod "69e94ec9-2a3b-4f85-a2b7-9e2f07359890" (UID: "69e94ec9-2a3b-4f85-a2b7-9e2f07359890"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.949534 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f" (OuterVolumeSpecName: "kube-api-access-q662f") pod "69e94ec9-2a3b-4f85-a2b7-9e2f07359890" (UID: "69e94ec9-2a3b-4f85-a2b7-9e2f07359890"). InnerVolumeSpecName "kube-api-access-q662f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:51:11 crc kubenswrapper[4842]: I0202 06:51:11.996615 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69e94ec9-2a3b-4f85-a2b7-9e2f07359890" (UID: "69e94ec9-2a3b-4f85-a2b7-9e2f07359890"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.040963 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwwsr\" (UniqueName: \"kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr\") pod \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041120 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k4r4\" (UniqueName: \"kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4\") pod \"de569fea-56ca-4762-9a22-a12561c296b6\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041279 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8v2l\" (UniqueName: \"kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l\") pod \"671957e9-c40d-416d-8756-a4d7f0abc317\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041406 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca\") pod \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041537 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities\") pod \"99088cf9-5dcc-4837-943b-4deca45c1401\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041671 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content\") pod \"de569fea-56ca-4762-9a22-a12561c296b6\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041771 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gfrg\" (UniqueName: \"kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg\") pod \"99088cf9-5dcc-4837-943b-4deca45c1401\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041878 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities\") pod \"de569fea-56ca-4762-9a22-a12561c296b6\" (UID: \"de569fea-56ca-4762-9a22-a12561c296b6\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.041982 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content\") pod \"671957e9-c40d-416d-8756-a4d7f0abc317\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042082 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities\") pod \"671957e9-c40d-416d-8756-a4d7f0abc317\" (UID: \"671957e9-c40d-416d-8756-a4d7f0abc317\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042196 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content\") pod \"99088cf9-5dcc-4837-943b-4deca45c1401\" (UID: \"99088cf9-5dcc-4837-943b-4deca45c1401\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042337 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics\") pod \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\" (UID: \"c4f753a1-ecf0-4b2c-9121-989677c6b2a6\") "
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042585 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042680 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q662f\" (UniqueName: \"kubernetes.io/projected/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-kube-api-access-q662f\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042781 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69e94ec9-2a3b-4f85-a2b7-9e2f07359890-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042039 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c4f753a1-ecf0-4b2c-9121-989677c6b2a6" (UID: "c4f753a1-ecf0-4b2c-9121-989677c6b2a6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042723 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities" (OuterVolumeSpecName: "utilities") pod "671957e9-c40d-416d-8756-a4d7f0abc317" (UID: "671957e9-c40d-416d-8756-a4d7f0abc317"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.042801 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities" (OuterVolumeSpecName: "utilities") pod "de569fea-56ca-4762-9a22-a12561c296b6" (UID: "de569fea-56ca-4762-9a22-a12561c296b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.044531 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4" (OuterVolumeSpecName: "kube-api-access-8k4r4") pod "de569fea-56ca-4762-9a22-a12561c296b6" (UID: "de569fea-56ca-4762-9a22-a12561c296b6"). InnerVolumeSpecName "kube-api-access-8k4r4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.045126 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l" (OuterVolumeSpecName: "kube-api-access-p8v2l") pod "671957e9-c40d-416d-8756-a4d7f0abc317" (UID: "671957e9-c40d-416d-8756-a4d7f0abc317"). InnerVolumeSpecName "kube-api-access-p8v2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.045308 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr" (OuterVolumeSpecName: "kube-api-access-pwwsr") pod "c4f753a1-ecf0-4b2c-9121-989677c6b2a6" (UID: "c4f753a1-ecf0-4b2c-9121-989677c6b2a6"). InnerVolumeSpecName "kube-api-access-pwwsr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.045304 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities" (OuterVolumeSpecName: "utilities") pod "99088cf9-5dcc-4837-943b-4deca45c1401" (UID: "99088cf9-5dcc-4837-943b-4deca45c1401"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.046152 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg" (OuterVolumeSpecName: "kube-api-access-7gfrg") pod "99088cf9-5dcc-4837-943b-4deca45c1401" (UID: "99088cf9-5dcc-4837-943b-4deca45c1401"). InnerVolumeSpecName "kube-api-access-7gfrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.047390 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c4f753a1-ecf0-4b2c-9121-989677c6b2a6" (UID: "c4f753a1-ecf0-4b2c-9121-989677c6b2a6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.072368 4842 generic.go:334] "Generic (PLEG): container finished" podID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerID="817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a" exitCode=0
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.072466 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.072483 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" event={"ID":"c4f753a1-ecf0-4b2c-9121-989677c6b2a6","Type":"ContainerDied","Data":"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a"}
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.072521 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bzsxn" event={"ID":"c4f753a1-ecf0-4b2c-9121-989677c6b2a6","Type":"ContainerDied","Data":"86551bfa40b78ac651aa4bb3b08214372121725e7903350eb4635288d82753ac"}
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.072543 4842 scope.go:117] "RemoveContainer" containerID="817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.076180 4842 generic.go:334] "Generic (PLEG): container finished" podID="de569fea-56ca-4762-9a22-a12561c296b6" containerID="c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025" exitCode=0
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.076270 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m2j5m"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.076237 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerDied","Data":"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025"}
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.076754 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m2j5m" event={"ID":"de569fea-56ca-4762-9a22-a12561c296b6","Type":"ContainerDied","Data":"281d01870ece6a3181561fda9dfe308cdde10657dccb47ecb2c8628297416b48"}
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.080062 4842 generic.go:334] "Generic (PLEG): container finished" podID="671957e9-c40d-416d-8756-a4d7f0abc317" containerID="6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca" exitCode=0
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.080202 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerDied","Data":"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca"}
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.080344 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-74vp9" event={"ID":"671957e9-c40d-416d-8756-a4d7f0abc317","Type":"ContainerDied","Data":"e77b162572adbddd868d73ee2b2382cf4886626b5d00d4cbd3b5a5a655acde51"}
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.080538 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-74vp9"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.083561 4842 generic.go:334] "Generic (PLEG): container finished" podID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerID="85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946" exitCode=0
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.083697 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerDied","Data":"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946"}
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.083741 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z5jt7" event={"ID":"69e94ec9-2a3b-4f85-a2b7-9e2f07359890","Type":"ContainerDied","Data":"70b3737c860965567c6708a9ff4cb3684a5c902cd3e8826074cbb967adb64bfe"}
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.083860 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z5jt7"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.086665 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de569fea-56ca-4762-9a22-a12561c296b6" (UID: "de569fea-56ca-4762-9a22-a12561c296b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.090241 4842 generic.go:334] "Generic (PLEG): container finished" podID="99088cf9-5dcc-4837-943b-4deca45c1401" containerID="d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a" exitCode=0
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.090371 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerDied","Data":"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a"}
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.090475 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l5m7" event={"ID":"99088cf9-5dcc-4837-943b-4deca45c1401","Type":"ContainerDied","Data":"535c1c949c7f7fddcdec8bd932015e6668761ecd24e167f9b71ea785616441c9"}
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.090484 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5l5m7"
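The generic.go:334 / kubelet.go:2453 pairs above are the Pod Lifecycle Event Generator (PLEG) at work: each relist compares the runtime's current view of a pod with the previous one and turns every container that exited into a ContainerDied event for the sync loop. A toy relist-and-diff sketch follows; the types are invented for illustration and are not the real PLEG's.

package main

import "fmt"

// podRecord is a toy snapshot of the containers the runtime reports for a pod.
type podRecord map[string]string // containerID -> state ("running", "exited")

type event struct {
	PodID, Type, Data string
}

// relist diffs two snapshots and emits lifecycle events, the way PLEG turns
// runtime state changes into ContainerDied/ContainerStarted events.
func relist(podID string, prev, curr podRecord) []event {
	var events []event
	for id, state := range curr {
		if prev[id] == "running" && state == "exited" {
			events = append(events, event{podID, "ContainerDied", id})
		}
		if prev[id] == "" && state == "running" {
			events = append(events, event{podID, "ContainerStarted", id})
		}
	}
	return events
}

func main() {
	prev := podRecord{"817668898fab5e51": "running"}
	curr := podRecord{"817668898fab5e51": "exited"}
	for _, e := range relist("c4f753a1-ecf0-4b2c-9121-989677c6b2a6", prev, curr) {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.PodID, e.Type, e.Data)
	}
}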
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.104536 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"]
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.109123 4842 scope.go:117] "RemoveContainer" containerID="817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.109411 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bzsxn"]
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.109838 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a\": container with ID starting with 817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a not found: ID does not exist" containerID="817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.109879 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a"} err="failed to get container status \"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a\": rpc error: code = NotFound desc = could not find container \"817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a\": container with ID starting with 817668898fab5e51b3abf3f80425b72d1a70674bf923b8b7745e92d2599cc31a not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.109906 4842 scope.go:117] "RemoveContainer" containerID="c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.122079 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z5jt7"]
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.126143 4842 scope.go:117] "RemoveContainer" containerID="d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.129035 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z5jt7"]
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.134324 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "671957e9-c40d-416d-8756-a4d7f0abc317" (UID: "671957e9-c40d-416d-8756-a4d7f0abc317"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.139520 4842 scope.go:117] "RemoveContainer" containerID="cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143780 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143802 4842 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143813 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwwsr\" (UniqueName: \"kubernetes.io/projected/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-kube-api-access-pwwsr\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143821 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k4r4\" (UniqueName: \"kubernetes.io/projected/de569fea-56ca-4762-9a22-a12561c296b6-kube-api-access-8k4r4\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143830 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8v2l\" (UniqueName: \"kubernetes.io/projected/671957e9-c40d-416d-8756-a4d7f0abc317-kube-api-access-p8v2l\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143838 4842 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c4f753a1-ecf0-4b2c-9121-989677c6b2a6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143846 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143855 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143863 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gfrg\" (UniqueName: \"kubernetes.io/projected/99088cf9-5dcc-4837-943b-4deca45c1401-kube-api-access-7gfrg\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143871 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de569fea-56ca-4762-9a22-a12561c296b6-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.143879 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/671957e9-c40d-416d-8756-a4d7f0abc317-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.158543 4842 scope.go:117] "RemoveContainer" containerID="c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.159022 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025\": container with ID starting with c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025 not found: ID does not exist" containerID="c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.159069 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025"} err="failed to get container status \"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025\": rpc error: code = NotFound desc = could not find container \"c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025\": container with ID starting with c1ebf104341f1b64aeb385d1323c7703ec3930f4b05b44743081df564666a025 not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.159098 4842 scope.go:117] "RemoveContainer" containerID="d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.159618 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae\": container with ID starting with d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae not found: ID does not exist" containerID="d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.159644 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae"} err="failed to get container status \"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae\": rpc error: code = NotFound desc = could not find container \"d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae\": container with ID starting with d76e8f3ff3b70f696577be9bac74169cf5aa0f3b5bca4534248c237af1a174ae not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.159656 4842 scope.go:117] "RemoveContainer" containerID="cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.159952 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c\": container with ID starting with cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c not found: ID does not exist" containerID="cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.159998 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c"} err="failed to get container status \"cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c\": rpc error: code = NotFound desc = could not find container \"cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c\": container with ID starting with cf10c220f8e4c7c18d7b3b75f229bca5f01dcb18f6861f8710751c184d04121c not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.160033 4842 scope.go:117] "RemoveContainer" containerID="6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca"
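The E-level log.go:32 / pod_container_deletor.go:53 pairs above and below are benign races: by the time the kubelet retries RemoveContainer, CRI-O has already deleted the container, so ContainerStatus comes back with gRPC NotFound. A sketch of treating NotFound as success so cleanup stays idempotent; the helper name and wiring are assumptions, reusing the CRI v1 client from the StopContainer sketch above.

package crihelpers

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeContainer deletes a container but treats "already gone" as success,
// so repeated cleanup passes do not fail on containers the runtime removed first.
func removeContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	// Fetch status first, as the kubelet does before removal.
	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already removed, nothing to do\n", id)
		return nil // the race seen in the log: CRI-O deleted it first
	}
	if err != nil {
		return err
	}
	_, err = rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
	return err
}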
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.175406 4842 scope.go:117] "RemoveContainer" containerID="e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.195194 4842 scope.go:117] "RemoveContainer" containerID="9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.205734 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99088cf9-5dcc-4837-943b-4deca45c1401" (UID: "99088cf9-5dcc-4837-943b-4deca45c1401"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.208841 4842 scope.go:117] "RemoveContainer" containerID="6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.209151 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca\": container with ID starting with 6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca not found: ID does not exist" containerID="6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.209186 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca"} err="failed to get container status \"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca\": rpc error: code = NotFound desc = could not find container \"6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca\": container with ID starting with 6d298e427c89cc0e226b9524675d73810802c4e0496cc96fde4fe468577994ca not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.209251 4842 scope.go:117] "RemoveContainer" containerID="e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.209595 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551\": container with ID starting with e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551 not found: ID does not exist" containerID="e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.209634 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551"} err="failed to get container status \"e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551\": rpc error: code = NotFound desc = could not find container \"e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551\": container with ID starting with e91b403fa46440a27510eeae00f55f43951f4cf12111dd68ea6cfd1f20c38551 not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.209662 4842 scope.go:117] "RemoveContainer" containerID="9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.209998 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99\": container with ID starting with 9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99 not found: ID does not exist" containerID="9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.210035 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99"} err="failed to get container status \"9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99\": rpc error: code = NotFound desc = could not find container \"9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99\": container with ID starting with 9bcffd62e37a672e39a6787f2c243578a0cd1be1df69a60bcc2f0670e3497e99 not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.210056 4842 scope.go:117] "RemoveContainer" containerID="85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.227693 4842 scope.go:117] "RemoveContainer" containerID="e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.245519 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99088cf9-5dcc-4837-943b-4deca45c1401-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.250126 4842 scope.go:117] "RemoveContainer" containerID="fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.266647 4842 scope.go:117] "RemoveContainer" containerID="85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.267228 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946\": container with ID starting with 85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946 not found: ID does not exist" containerID="85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.267263 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946"} err="failed to get container status \"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946\": rpc error: code = NotFound desc = could not find container \"85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946\": container with ID starting with 85f5ced4ee389cf80b2537c6c6be6222dce94b986e1132434f4b542801563946 not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.267290 4842 scope.go:117] "RemoveContainer" containerID="e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.267768 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35\": container with ID starting with e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35 not found: ID does not exist" containerID="e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.267792 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35"} err="failed to get container status \"e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35\": rpc error: code = NotFound desc = could not find container \"e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35\": container with ID starting with e44426c8cdd109cadacef3f6400e5d74ea8d1d653b5ed8dbe5f5917e6c3ffd35 not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.267810 4842 scope.go:117] "RemoveContainer" containerID="fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.268201 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a\": container with ID starting with fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a not found: ID does not exist" containerID="fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.268241 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a"} err="failed to get container status \"fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a\": rpc error: code = NotFound desc = could not find container \"fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a\": container with ID starting with fe4e6b5eae92ea98fb26f6084fef88f48ca6a4485abf0bfb20d4e4bb6702033a not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.268257 4842 scope.go:117] "RemoveContainer" containerID="d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.283233 4842 scope.go:117] "RemoveContainer" containerID="6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.300693 4842 scope.go:117] "RemoveContainer" containerID="4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.317508 4842 scope.go:117] "RemoveContainer" containerID="d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.317845 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a\": container with ID starting with d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a not found: ID does not exist" containerID="d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.317947 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a"} err="failed to get container status \"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a\": rpc error: code = NotFound desc = could not find container \"d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a\": container with ID starting with d50c37c1b7039a80441e89dbdfb8b545c69d2e2508f8a898b31ac557a8166b6a not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.318001 4842 scope.go:117] "RemoveContainer" containerID="6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.318490 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41\": container with ID starting with 6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41 not found: ID does not exist" containerID="6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.318528 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41"} err="failed to get container status \"6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41\": rpc error: code = NotFound desc = could not find container \"6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41\": container with ID starting with 6a2e8fb4961b678938d98e90622e1cbdba67d44fcb1494b89358728417072d41 not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.318557 4842 scope.go:117] "RemoveContainer" containerID="4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164"
Feb 02 06:51:12 crc kubenswrapper[4842]: E0202 06:51:12.319094 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164\": container with ID starting with 4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164 not found: ID does not exist" containerID="4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.319134 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164"} err="failed to get container status \"4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164\": rpc error: code = NotFound desc = could not find container \"4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164\": container with ID starting with 4762ff727f3a29ba6e1e6ee69579ecdb61b217f4f4f61f0b0baff1fd8408e164 not found: ID does not exist"
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.479388 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"]
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.485374 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m2j5m"]
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.489981 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-74vp9"]
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.494721 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-74vp9"]
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.498280 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"]
Feb 02 06:51:12 crc kubenswrapper[4842]: I0202 06:51:12.501958 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5l5m7"]
Feb 02 06:51:13 crc kubenswrapper[4842]: I0202 06:51:13.446194 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" path="/var/lib/kubelet/pods/671957e9-c40d-416d-8756-a4d7f0abc317/volumes"
Feb 02 06:51:13 crc kubenswrapper[4842]: I0202 06:51:13.447476 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" path="/var/lib/kubelet/pods/69e94ec9-2a3b-4f85-a2b7-9e2f07359890/volumes"
Feb 02 06:51:13 crc kubenswrapper[4842]: I0202 06:51:13.448598 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" path="/var/lib/kubelet/pods/99088cf9-5dcc-4837-943b-4deca45c1401/volumes"
Feb 02 06:51:13 crc kubenswrapper[4842]: I0202 06:51:13.450725 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" path="/var/lib/kubelet/pods/c4f753a1-ecf0-4b2c-9121-989677c6b2a6/volumes"
Feb 02 06:51:13 crc kubenswrapper[4842]: I0202 06:51:13.451613 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de569fea-56ca-4762-9a22-a12561c296b6" path="/var/lib/kubelet/pods/de569fea-56ca-4762-9a22-a12561c296b6/volumes"
Feb 02 06:51:15 crc kubenswrapper[4842]: I0202 06:51:15.195310 4842 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 02 06:51:26 crc kubenswrapper[4842]: I0202 06:51:26.177270 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 02 06:51:26 crc kubenswrapper[4842]: I0202 06:51:26.182121 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 02 06:51:26 crc kubenswrapper[4842]: I0202 06:51:26.182202 4842 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f031981f643f5d87b4def10d3e2db442ecf61d86a5b06ab2a2c7e39a48be9b60" exitCode=137
Feb 02 06:51:26 crc kubenswrapper[4842]: I0202 06:51:26.182287 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f031981f643f5d87b4def10d3e2db442ecf61d86a5b06ab2a2c7e39a48be9b60"}
Feb 02 06:51:26 crc kubenswrapper[4842]: I0202 06:51:26.182341 4842 scope.go:117] "RemoveContainer" containerID="2db37f1a4ef61401bc77b6f9fe89a975ade486c1ae6ffcec9905700d310637cf"
Feb 02 06:51:27 crc kubenswrapper[4842]: I0202 06:51:27.192812 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 02 06:51:27 crc kubenswrapper[4842]: I0202 06:51:27.196397 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"78da50ea86651ac25aa1e24a46dbb6da9b002e43bb0d9c6ca3d0e83131eb7c66"}
Feb 02 06:51:31 crc kubenswrapper[4842]: I0202 06:51:31.898626 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
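The kubelet_volumes.go:163 entries above record the housekeeping pass that deletes /var/lib/kubelet/pods/<uid>/volumes directories once a deleted pod's volumes have all been unmounted. A simplified sketch of that scan-and-remove pass; the known-pods set and the helper name are illustrative assumptions.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cleanupOrphanedPodDirs(podsRoot string, known map[string]bool) error {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		uid := e.Name()
		if !e.IsDir() || known[uid] {
			continue // pod is still active; leave its directory alone
		}
		volumes := filepath.Join(podsRoot, uid, "volumes")
		// Only safe once every volume under this path has been unmounted.
		if err := os.RemoveAll(volumes); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", uid, volumes)
	}
	return nil
}

func main() {
	// On a real node this is /var/lib/kubelet/pods plus the set of pod UIDs
	// the kubelet still manages; both are placeholders here.
	_ = cleanupOrphanedPodDirs("/var/lib/kubelet/pods", map[string]bool{})
}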
Feb 02 06:51:35 crc kubenswrapper[4842]: I0202 06:51:35.604406 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:51:35 crc kubenswrapper[4842]: I0202 06:51:35.610572 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:51:36 crc kubenswrapper[4842]: I0202 06:51:36.254119 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 02 06:51:42 crc kubenswrapper[4842]: I0202 06:51:42.146526 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 06:51:42 crc kubenswrapper[4842]: I0202 06:51:42.146835 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.282525 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbb7f"]
Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283063 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="extract-content"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283078 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="extract-content"
Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283090 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="registry-server"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283098 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="registry-server"
Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283112 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="registry-server"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283120 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="registry-server"
Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283133 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" containerName="installer"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283142 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" containerName="installer"
Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283153 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="extract-utilities"
06:51:44.283161 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="extract-utilities" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283173 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283180 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283189 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283197 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283207 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283234 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283249 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283259 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="registry-server" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283269 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="extract-utilities" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283276 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="extract-utilities" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283287 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283294 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283304 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283312 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="extract-content" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283324 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283331 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283340 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="extract-utilities" Feb 02 06:51:44 
crc kubenswrapper[4842]: I0202 06:51:44.283349 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="extract-utilities"
Feb 02 06:51:44 crc kubenswrapper[4842]: E0202 06:51:44.283358 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="extract-utilities"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283367 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="extract-utilities"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283474 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="de569fea-56ca-4762-9a22-a12561c296b6" containerName="registry-server"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283485 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f753a1-ecf0-4b2c-9121-989677c6b2a6" containerName="marketplace-operator"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283495 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e94ec9-2a3b-4f85-a2b7-9e2f07359890" containerName="registry-server"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283509 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="671957e9-c40d-416d-8756-a4d7f0abc317" containerName="registry-server"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283526 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283541 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea82b6bc-5c1e-496e-8501-45fdb7220cbb" containerName="installer"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283551 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="99088cf9-5dcc-4837-943b-4deca45c1401" containerName="registry-server"
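The cpu_manager, state_mem, and memory_manager entries above are the kubelet's resource managers garbage-collecting per-container CPU and memory assignments left behind by pods that no longer exist, so the new marketplace-operator pod starts from a clean slate. A minimal sketch of that cleanup pattern in Go; the key type and the activePods set are illustrative stand-ins, not the kubelet's actual state structures:

package main

import "fmt"

// key mirrors how the log identifies state: a pod UID plus a container name.
type key struct{ podUID, container string }

// removeStaleState drops any recorded assignment whose pod is no longer
// active, echoing the "RemoveStaleState: removing container" /
// "Deleted CPUSet assignment" pairs in the entries above.
func removeStaleState(assignments map[key][]int, activePods map[string]bool) {
	for k := range assignments { // deleting during range is safe in Go
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %q\n", k.container, k.podUID)
			delete(assignments, k)
		}
	}
}

func main() {
	state := map[key][]int{
		{podUID: "de569fea-56ca-4762-9a22-a12561c296b6", container: "registry-server"}: {0, 1},
	}
	removeStaleState(state, map[string]bool{}) // no active pods left: entry is removed
}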
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.283946 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.285525 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.286122 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.287490 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.291982 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbb7f"]
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.293719 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.294763 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.394370 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.394639 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t2zw\" (UniqueName: \"kubernetes.io/projected/57f599bc-2735-4763-8510-fe623d36bd10-kube-api-access-8t2zw\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.394783 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.496274 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t2zw\" (UniqueName: \"kubernetes.io/projected/57f599bc-2735-4763-8510-fe623d36bd10-kube-api-access-8t2zw\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.496532 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
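The reconciler_common entries above, and the operation_generator entries that follow, are one pass of the kubelet's volume manager: for each volume in the pod spec it first confirms the volume is attached (VerifyControllerAttachedVolume), then kicks off the mount (MountVolume started), and logs MountVolume.SetUp succeeded once the volume is in place. A toy model of that desired-state versus actual-state loop, with made-up names (the real reconciler lives in the kubelet's volumemanager package and is considerably more involved):

package main

import "fmt"

// reconcile mounts every volume the pod spec wants that is not mounted yet,
// mimicking the VerifyControllerAttachedVolume -> MountVolume started ->
// MountVolume.SetUp succeeded progression in the surrounding entries.
func reconcile(desired []string, mounted map[string]bool) {
	for _, vol := range desired {
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", vol)
		if mounted[vol] {
			continue
		}
		fmt.Printf("MountVolume started for volume %q\n", vol)
		mounted[vol] = true // stand-in for the actual mount work
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", vol)
	}
}

func main() {
	reconcile(
		[]string{"marketplace-trusted-ca", "kube-api-access-8t2zw", "marketplace-operator-metrics"},
		map[string]bool{},
	)
}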
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.496699 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.498560 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.505866 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/57f599bc-2735-4763-8510-fe623d36bd10-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.522465 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t2zw\" (UniqueName: \"kubernetes.io/projected/57f599bc-2735-4763-8510-fe623d36bd10-kube-api-access-8t2zw\") pod \"marketplace-operator-79b997595-vbb7f\" (UID: \"57f599bc-2735-4763-8510-fe623d36bd10\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:44 crc kubenswrapper[4842]: I0202 06:51:44.606464 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.079901 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbb7f"]
Feb 02 06:51:45 crc kubenswrapper[4842]: W0202 06:51:45.088521 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57f599bc_2735_4763_8510_fe623d36bd10.slice/crio-ca0356da044adbef390e90e20938fe72bb67a46c8b459ab50af603074356bcf7 WatchSource:0}: Error finding container ca0356da044adbef390e90e20938fe72bb67a46c8b459ab50af603074356bcf7: Status 404 returned error can't find the container with id ca0356da044adbef390e90e20938fe72bb67a46c8b459ab50af603074356bcf7
Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.312639 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" event={"ID":"57f599bc-2735-4763-8510-fe623d36bd10","Type":"ContainerStarted","Data":"5a028b56f6be560eecf683452dffe8b0b1a412dcbff418e49682824a67abab0c"}
Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.312999 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f"
Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.313011 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" event={"ID":"57f599bc-2735-4763-8510-fe623d36bd10","Type":"ContainerStarted","Data":"ca0356da044adbef390e90e20938fe72bb67a46c8b459ab50af603074356bcf7"}
Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.314320 4842 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vbb7f 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" start-of-body= Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.314389 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" podUID="57f599bc-2735-4763-8510-fe623d36bd10" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.66:8080/healthz\": dial tcp 10.217.0.66:8080: connect: connection refused" Feb 02 06:51:45 crc kubenswrapper[4842]: I0202 06:51:45.327436 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" podStartSLOduration=1.327415117 podStartE2EDuration="1.327415117s" podCreationTimestamp="2026-02-02 06:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:51:45.32592974 +0000 UTC m=+330.703197672" watchObservedRunningTime="2026-02-02 06:51:45.327415117 +0000 UTC m=+330.704683039" Feb 02 06:51:46 crc kubenswrapper[4842]: I0202 06:51:46.322413 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vbb7f" Feb 02 06:52:12 crc kubenswrapper[4842]: I0202 06:52:12.146405 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:52:12 crc kubenswrapper[4842]: I0202 06:52:12.147123 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.905388 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sw8ll"] Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.908468 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.913146 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.928897 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw8ll"] Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.943959 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-utilities\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.944059 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-catalog-content\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:36 crc kubenswrapper[4842]: I0202 06:52:36.944132 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2kkx\" (UniqueName: \"kubernetes.io/projected/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-kube-api-access-f2kkx\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.045638 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-utilities\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.045728 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-catalog-content\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.045819 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2kkx\" (UniqueName: \"kubernetes.io/projected/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-kube-api-access-f2kkx\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.046678 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-catalog-content\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.046929 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-utilities\") pod \"redhat-marketplace-sw8ll\" (UID: 
\"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.079585 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l6tg7"] Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.081320 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.084321 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.095815 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2kkx\" (UniqueName: \"kubernetes.io/projected/7ea1df1c-0a15-44a8-9bb6-9f4513c3b482-kube-api-access-f2kkx\") pod \"redhat-marketplace-sw8ll\" (UID: \"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482\") " pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.108132 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6tg7"] Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.147263 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-utilities\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.147794 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-catalog-content\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.148013 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm5ph\" (UniqueName: \"kubernetes.io/projected/23620448-86fc-4fa7-9295-d9ce6de9b8e6-kube-api-access-xm5ph\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.245161 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.249379 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm5ph\" (UniqueName: \"kubernetes.io/projected/23620448-86fc-4fa7-9295-d9ce6de9b8e6-kube-api-access-xm5ph\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.249421 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-utilities\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.249447 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-catalog-content\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.249983 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-catalog-content\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.250268 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23620448-86fc-4fa7-9295-d9ce6de9b8e6-utilities\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.282521 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm5ph\" (UniqueName: \"kubernetes.io/projected/23620448-86fc-4fa7-9295-d9ce6de9b8e6-kube-api-access-xm5ph\") pod \"redhat-operators-l6tg7\" (UID: \"23620448-86fc-4fa7-9295-d9ce6de9b8e6\") " pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.431036 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.748318 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sw8ll"] Feb 02 06:52:37 crc kubenswrapper[4842]: I0202 06:52:37.820543 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l6tg7"] Feb 02 06:52:37 crc kubenswrapper[4842]: W0202 06:52:37.828725 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23620448_86fc_4fa7_9295_d9ce6de9b8e6.slice/crio-95ca99b21910606d2c47650eca9e96c16490efb370f37a1468d70d2d95cf5ebf WatchSource:0}: Error finding container 95ca99b21910606d2c47650eca9e96c16490efb370f37a1468d70d2d95cf5ebf: Status 404 returned error can't find the container with id 95ca99b21910606d2c47650eca9e96c16490efb370f37a1468d70d2d95cf5ebf Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.672450 4842 generic.go:334] "Generic (PLEG): container finished" podID="7ea1df1c-0a15-44a8-9bb6-9f4513c3b482" containerID="202eb0ed13787963a10bf55283d9c4e45e11b412b59ad5ec22d40c596f942361" exitCode=0 Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.672601 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw8ll" event={"ID":"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482","Type":"ContainerDied","Data":"202eb0ed13787963a10bf55283d9c4e45e11b412b59ad5ec22d40c596f942361"} Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.672677 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw8ll" event={"ID":"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482","Type":"ContainerStarted","Data":"fa6fd8a0c06a34b0ca6d89f9c1b466f8f03ab7ac5f66075a426e545a2a8336a1"} Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.677609 4842 generic.go:334] "Generic (PLEG): container finished" podID="23620448-86fc-4fa7-9295-d9ce6de9b8e6" containerID="463ca7b2922b0b1e47b5a4f43563c0d021fbbe0f59f263c3790edf02314dc179" exitCode=0 Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.677672 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6tg7" event={"ID":"23620448-86fc-4fa7-9295-d9ce6de9b8e6","Type":"ContainerDied","Data":"463ca7b2922b0b1e47b5a4f43563c0d021fbbe0f59f263c3790edf02314dc179"} Feb 02 06:52:38 crc kubenswrapper[4842]: I0202 06:52:38.677694 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6tg7" event={"ID":"23620448-86fc-4fa7-9295-d9ce6de9b8e6","Type":"ContainerStarted","Data":"95ca99b21910606d2c47650eca9e96c16490efb370f37a1468d70d2d95cf5ebf"} Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.277977 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.280838 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.284889 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.285056 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.285177 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvsxr\" (UniqueName: \"kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.285254 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.301607 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.385878 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvsxr\" (UniqueName: \"kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.385934 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.385992 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.386531 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.387113 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content\") pod \"certified-operators-cbwzh\" (UID: 
\"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.420146 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvsxr\" (UniqueName: \"kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr\") pod \"certified-operators-cbwzh\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.486414 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.488383 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.492073 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.499161 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.587813 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.587854 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.587892 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfhk4\" (UniqueName: \"kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.618966 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.686283 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6tg7" event={"ID":"23620448-86fc-4fa7-9295-d9ce6de9b8e6","Type":"ContainerStarted","Data":"9727ff0e3a5e00814bb179b8ed20d49caf1473e2400b6a7045c76b5ee6d4faf7"} Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.688477 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfhk4\" (UniqueName: \"kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.688630 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.688673 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.689591 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.690060 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.690265 4842 generic.go:334] "Generic (PLEG): container finished" podID="7ea1df1c-0a15-44a8-9bb6-9f4513c3b482" containerID="0011302c13329c7c74cf16d15cd5f5d4701095d6cd3bafecc836bf320d978a43" exitCode=0 Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.690309 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw8ll" event={"ID":"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482","Type":"ContainerDied","Data":"0011302c13329c7c74cf16d15cd5f5d4701095d6cd3bafecc836bf320d978a43"} Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.728902 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfhk4\" (UniqueName: \"kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4\") pod \"community-operators-7hg8l\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.822903 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:39 crc kubenswrapper[4842]: I0202 06:52:39.861428 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 06:52:39 crc kubenswrapper[4842]: W0202 06:52:39.869300 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9969706e_304c_490a_b15d_7d0bfc99261c.slice/crio-87da024578fe003edad40db056fe8ec4f30280deba8415eb825b3aeb82ca3997 WatchSource:0}: Error finding container 87da024578fe003edad40db056fe8ec4f30280deba8415eb825b3aeb82ca3997: Status 404 returned error can't find the container with id 87da024578fe003edad40db056fe8ec4f30280deba8415eb825b3aeb82ca3997 Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.057791 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 06:52:40 crc kubenswrapper[4842]: W0202 06:52:40.086862 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79d21de2_d86f_4434_a132_ac1e81b63377.slice/crio-2d2ab29782781bce630b9b1ec33d723639705b917f6488a85a84e3a08847027a WatchSource:0}: Error finding container 2d2ab29782781bce630b9b1ec33d723639705b917f6488a85a84e3a08847027a: Status 404 returned error can't find the container with id 2d2ab29782781bce630b9b1ec33d723639705b917f6488a85a84e3a08847027a Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.697622 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sw8ll" event={"ID":"7ea1df1c-0a15-44a8-9bb6-9f4513c3b482","Type":"ContainerStarted","Data":"54dec166f57b910e181cdf37ff3f59c04c4e26cfb2b9d16cebee45ff070289b6"} Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.698687 4842 generic.go:334] "Generic (PLEG): container finished" podID="9969706e-304c-490a-b15d-7d0bfc99261c" containerID="cdc5b57eaa471b1df4736cdcd50fb5f9ddf54fbd99f33734d0e692fc9f77a97f" exitCode=0 Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.698740 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerDied","Data":"cdc5b57eaa471b1df4736cdcd50fb5f9ddf54fbd99f33734d0e692fc9f77a97f"} Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.698758 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerStarted","Data":"87da024578fe003edad40db056fe8ec4f30280deba8415eb825b3aeb82ca3997"} Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.701072 4842 generic.go:334] "Generic (PLEG): container finished" podID="79d21de2-d86f-4434-a132-ac1e81b63377" containerID="29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74" exitCode=0 Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.701120 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerDied","Data":"29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74"} Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.701140 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" 
event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerStarted","Data":"2d2ab29782781bce630b9b1ec33d723639705b917f6488a85a84e3a08847027a"}
Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.705509 4842 generic.go:334] "Generic (PLEG): container finished" podID="23620448-86fc-4fa7-9295-d9ce6de9b8e6" containerID="9727ff0e3a5e00814bb179b8ed20d49caf1473e2400b6a7045c76b5ee6d4faf7" exitCode=0
Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.705664 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6tg7" event={"ID":"23620448-86fc-4fa7-9295-d9ce6de9b8e6","Type":"ContainerDied","Data":"9727ff0e3a5e00814bb179b8ed20d49caf1473e2400b6a7045c76b5ee6d4faf7"}
Feb 02 06:52:40 crc kubenswrapper[4842]: I0202 06:52:40.724646 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sw8ll" podStartSLOduration=3.314480442 podStartE2EDuration="4.724620025s" podCreationTimestamp="2026-02-02 06:52:36 +0000 UTC" firstStartedPulling="2026-02-02 06:52:38.674645229 +0000 UTC m=+384.051913151" lastFinishedPulling="2026-02-02 06:52:40.084784822 +0000 UTC m=+385.462052734" observedRunningTime="2026-02-02 06:52:40.7207987 +0000 UTC m=+386.098066612" watchObservedRunningTime="2026-02-02 06:52:40.724620025 +0000 UTC m=+386.101887967"
Feb 02 06:52:41 crc kubenswrapper[4842]: I0202 06:52:41.713057 4842 generic.go:334] "Generic (PLEG): container finished" podID="79d21de2-d86f-4434-a132-ac1e81b63377" containerID="0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809" exitCode=0
Feb 02 06:52:41 crc kubenswrapper[4842]: I0202 06:52:41.713294 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerDied","Data":"0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809"}
Feb 02 06:52:41 crc kubenswrapper[4842]: I0202 06:52:41.718449 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l6tg7" event={"ID":"23620448-86fc-4fa7-9295-d9ce6de9b8e6","Type":"ContainerStarted","Data":"105268e6936de62f4c5db8f06e036fa59b9f99d9a1c12f936125be1f6dcb0eaa"}
Feb 02 06:52:41 crc kubenswrapper[4842]: I0202 06:52:41.724584 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerStarted","Data":"308b61160ba5e467d88f1ac70bd85a0adb7d7b33d6c1eb5a0233036f6970dc7b"}
Feb 02 06:52:41 crc kubenswrapper[4842]: I0202 06:52:41.765663 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l6tg7" podStartSLOduration=2.312734849 podStartE2EDuration="4.765644104s" podCreationTimestamp="2026-02-02 06:52:37 +0000 UTC" firstStartedPulling="2026-02-02 06:52:38.679097289 +0000 UTC m=+384.056365211" lastFinishedPulling="2026-02-02 06:52:41.132006514 +0000 UTC m=+386.509274466" observedRunningTime="2026-02-02 06:52:41.762877016 +0000 UTC m=+387.140144948" watchObservedRunningTime="2026-02-02 06:52:41.765644104 +0000 UTC m=+387.142912016"
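The two pod_startup_latency_tracker records above are the kubelet's startup-SLO accounting. Reading the fields, podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure less the image-pull window (lastFinishedPulling minus firstStartedPulling); the sketch below reproduces the redhat-marketplace-sw8ll numbers to within a few tens of nanoseconds. This arithmetic is an inference from the logged fields, not the kubelet's exact code. (The machine-config-daemon entries that follow then show a complete liveness-failure cycle: probe failure, a kill with gracePeriod=600, and a restart.)

package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.Time formatting used in the log fields.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Field values copied from the redhat-marketplace-sw8ll record above.
	created := mustParse("2026-02-02 06:52:36 +0000 UTC")
	firstPull := mustParse("2026-02-02 06:52:38.674645229 +0000 UTC")
	lastPull := mustParse("2026-02-02 06:52:40.084784822 +0000 UTC")
	running := mustParse("2026-02-02 06:52:40.724620025 +0000 UTC")

	e2e := running.Sub(created)          // 4.724620025s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // ~3.31448s, matching podStartSLOduration
	fmt.Println(e2e, slo)
}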
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.146344 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.146439 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.146523 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr"
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.147505 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.147655 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb" gracePeriod=600
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.729987 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb" exitCode=0
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.730078 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb"}
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.730338 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00"}
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.730363 4842 scope.go:117] "RemoveContainer" containerID="b07aadea1d5739c7704fa4cb6b40453e6656632398935ea28b8670896cfb67a5"
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.733338 4842 generic.go:334] "Generic (PLEG): container finished" podID="9969706e-304c-490a-b15d-7d0bfc99261c" containerID="308b61160ba5e467d88f1ac70bd85a0adb7d7b33d6c1eb5a0233036f6970dc7b" exitCode=0
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.733526 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerDied","Data":"308b61160ba5e467d88f1ac70bd85a0adb7d7b33d6c1eb5a0233036f6970dc7b"}
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.737286 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerStarted","Data":"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52"}
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.802468 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7hg8l" podStartSLOduration=2.377048381 podStartE2EDuration="3.80243768s" podCreationTimestamp="2026-02-02 06:52:39 +0000 UTC" firstStartedPulling="2026-02-02 06:52:40.702187431 +0000 UTC m=+386.079455343" lastFinishedPulling="2026-02-02 06:52:42.12757671 +0000 UTC m=+387.504844642" observedRunningTime="2026-02-02 06:52:42.796594695 +0000 UTC m=+388.173862647" watchObservedRunningTime="2026-02-02 06:52:42.80243768 +0000 UTC m=+388.179705642"
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.975382 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzdms"]
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.975977 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms"
Feb 02 06:52:42 crc kubenswrapper[4842]: I0202 06:52:42.990798 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzdms"]
Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.127791 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-certificates\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms"
Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.127840 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-tls\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms"
Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.127877 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/19ce6df2-ffac-4035-8737-e17bebecbf03-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms"
Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.127899 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-trusted-ca\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms"
Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.128038 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7mx7\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-kube-api-access-w7mx7\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms"
Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.128148 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/19ce6df2-ffac-4035-8737-e17bebecbf03-ca-trust-extracted\") pod 
\"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.128193 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-bound-sa-token\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.128292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.160645 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229419 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-certificates\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229485 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-tls\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229523 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/19ce6df2-ffac-4035-8737-e17bebecbf03-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229564 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-trusted-ca\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229586 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7mx7\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-kube-api-access-w7mx7\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229646 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/19ce6df2-ffac-4035-8737-e17bebecbf03-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.229671 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-bound-sa-token\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.230567 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/19ce6df2-ffac-4035-8737-e17bebecbf03-ca-trust-extracted\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.230715 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-trusted-ca\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.230904 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-certificates\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.240832 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/19ce6df2-ffac-4035-8737-e17bebecbf03-installation-pull-secrets\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.245458 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-registry-tls\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.257482 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-bound-sa-token\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.259818 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7mx7\" (UniqueName: 
\"kubernetes.io/projected/19ce6df2-ffac-4035-8737-e17bebecbf03-kube-api-access-w7mx7\") pod \"image-registry-66df7c8f76-nzdms\" (UID: \"19ce6df2-ffac-4035-8737-e17bebecbf03\") " pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.293205 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.508112 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-nzdms"] Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.745143 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerStarted","Data":"e64acd0481969dd97f8f6ecb1ab6976f73e44f1ae7f1c189557824f80b337968"} Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.753072 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" event={"ID":"19ce6df2-ffac-4035-8737-e17bebecbf03","Type":"ContainerStarted","Data":"49106198fd0e2923d4960595d5c3f7760e4cb0aa2f4b6d1c7ec4eec257c6e80e"} Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.753101 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" event={"ID":"19ce6df2-ffac-4035-8737-e17bebecbf03","Type":"ContainerStarted","Data":"878bbedeeb19eb69ed9665aa9d457705f19cb2abf881f6c7940046f5bd4b5f98"} Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.753375 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.788819 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" podStartSLOduration=1.788803718 podStartE2EDuration="1.788803718s" podCreationTimestamp="2026-02-02 06:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:52:43.787127367 +0000 UTC m=+389.164395299" watchObservedRunningTime="2026-02-02 06:52:43.788803718 +0000 UTC m=+389.166071630" Feb 02 06:52:43 crc kubenswrapper[4842]: I0202 06:52:43.790103 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cbwzh" podStartSLOduration=2.163152894 podStartE2EDuration="4.79009905s" podCreationTimestamp="2026-02-02 06:52:39 +0000 UTC" firstStartedPulling="2026-02-02 06:52:40.699745481 +0000 UTC m=+386.077013393" lastFinishedPulling="2026-02-02 06:52:43.326691627 +0000 UTC m=+388.703959549" observedRunningTime="2026-02-02 06:52:43.771604143 +0000 UTC m=+389.148872075" watchObservedRunningTime="2026-02-02 06:52:43.79009905 +0000 UTC m=+389.167366962" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.246358 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.247044 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.320731 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.432169 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.432268 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:47 crc kubenswrapper[4842]: I0202 06:52:47.841682 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sw8ll" Feb 02 06:52:48 crc kubenswrapper[4842]: I0202 06:52:48.501111 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l6tg7" podUID="23620448-86fc-4fa7-9295-d9ce6de9b8e6" containerName="registry-server" probeResult="failure" output=< Feb 02 06:52:48 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 06:52:48 crc kubenswrapper[4842]: > Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.620205 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.620306 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.684020 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.823938 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.824023 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.862996 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 06:52:49 crc kubenswrapper[4842]: I0202 06:52:49.913678 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:50 crc kubenswrapper[4842]: I0202 06:52:50.856554 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 06:52:57 crc kubenswrapper[4842]: I0202 06:52:57.470822 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:52:57 crc kubenswrapper[4842]: I0202 06:52:57.520434 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l6tg7" Feb 02 06:53:03 crc kubenswrapper[4842]: I0202 06:53:03.301706 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-nzdms" Feb 02 06:53:03 crc kubenswrapper[4842]: I0202 06:53:03.402814 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.462700 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" 
podUID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" containerName="registry" containerID="cri-o://c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4" gracePeriod=30 Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.863308 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933062 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933136 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933184 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933246 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933290 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjbqr\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933549 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933601 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933645 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted\") pod \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\" (UID: \"b76f3bc4-4824-422b-a14a-e7cd193ed30d\") " Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.933975 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod 
"b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.934530 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.943550 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.944469 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr" (OuterVolumeSpecName: "kube-api-access-tjbqr") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "kube-api-access-tjbqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.947500 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.949755 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.950116 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:53:28 crc kubenswrapper[4842]: I0202 06:53:28.958388 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b76f3bc4-4824-422b-a14a-e7cd193ed30d" (UID: "b76f3bc4-4824-422b-a14a-e7cd193ed30d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035135 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035188 4842 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b76f3bc4-4824-422b-a14a-e7cd193ed30d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035207 4842 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035262 4842 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b76f3bc4-4824-422b-a14a-e7cd193ed30d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035276 4842 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035290 4842 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.035303 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjbqr\" (UniqueName: \"kubernetes.io/projected/b76f3bc4-4824-422b-a14a-e7cd193ed30d-kube-api-access-tjbqr\") on node \"crc\" DevicePath \"\"" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.066386 4842 generic.go:334] "Generic (PLEG): container finished" podID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" containerID="c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4" exitCode=0 Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.066468 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" event={"ID":"b76f3bc4-4824-422b-a14a-e7cd193ed30d","Type":"ContainerDied","Data":"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4"} Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.066486 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.066534 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fz9q2" event={"ID":"b76f3bc4-4824-422b-a14a-e7cd193ed30d","Type":"ContainerDied","Data":"abf58a7559b9cdd76c76ebedd2333919bb6bc99060b8c1cfc73575fcdd484652"} Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.066573 4842 scope.go:117] "RemoveContainer" containerID="c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.101908 4842 scope.go:117] "RemoveContainer" containerID="c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4" Feb 02 06:53:29 crc kubenswrapper[4842]: E0202 06:53:29.103050 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4\": container with ID starting with c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4 not found: ID does not exist" containerID="c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.103102 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4"} err="failed to get container status \"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4\": rpc error: code = NotFound desc = could not find container \"c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4\": container with ID starting with c0f1dc5f34d1f80386e6fdb357944d83aa2b47bec8fd128a2011aa5bc422e3b4 not found: ID does not exist" Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.120798 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.124791 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fz9q2"] Feb 02 06:53:29 crc kubenswrapper[4842]: I0202 06:53:29.445418 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" path="/var/lib/kubelet/pods/b76f3bc4-4824-422b-a14a-e7cd193ed30d/volumes" Feb 02 06:54:42 crc kubenswrapper[4842]: I0202 06:54:42.146047 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:54:42 crc kubenswrapper[4842]: I0202 06:54:42.147059 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:55:12 crc kubenswrapper[4842]: I0202 06:55:12.146094 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 
02 06:55:12 crc kubenswrapper[4842]: I0202 06:55:12.146802 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.146371 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.148653 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.148847 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.149856 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.150099 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00" gracePeriod=600 Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.961416 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00" exitCode=0 Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.961495 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00"} Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.961909 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62"} Feb 02 06:55:42 crc kubenswrapper[4842]: I0202 06:55:42.961945 4842 scope.go:117] "RemoveContainer" containerID="26f863875b25adddb851bd7939cdd2a355f863cc15cc7b84383d70ddfd11cabb" Feb 02 06:56:15 crc kubenswrapper[4842]: I0202 06:56:15.746882 4842 scope.go:117] "RemoveContainer" containerID="55e75296f0e6047802f588fbbf9926e666199b348dea699c186a87607d8698c7" Feb 02 06:57:42 crc kubenswrapper[4842]: I0202 06:57:42.145913 4842 patch_prober.go:28] 
interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:57:42 crc kubenswrapper[4842]: I0202 06:57:42.146650 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:58:12 crc kubenswrapper[4842]: I0202 06:58:12.146605 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 06:58:12 crc kubenswrapper[4842]: I0202 06:58:12.147441 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.649855 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-njnbq"] Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656316 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-controller" containerID="cri-o://638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656477 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="northd" containerID="cri-o://6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656530 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="nbdb" containerID="cri-o://64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656519 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-node" containerID="cri-o://78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656513 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="sbdb" containerID="cri-o://97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656559 4842 kuberuntime_container.go:808] "Killing container with a grace period" 
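The machine-config-daemon liveness failures above arrive at a steady 30-second cadence, and after repeated failures the kubelet logs "failed liveness probe, will be restarted" and kills the container with gracePeriod=600. A sketch of a probe spec consistent with the logged URL http://127.0.0.1:8798/health; periodSeconds is inferred from the log cadence, while the threshold and the Host field are assumptions (Host is normally omitted so the pod IP is used):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessProbe sketches a probe matching the failures logged above:
// an HTTP GET against 127.0.0.1:8798/health.
var livenessProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/health",
			Port: intstr.FromInt(8798),
			Host: "127.0.0.1", // set only to mirror the logged URL
		},
	},
	PeriodSeconds:    30, // failures above are 30s apart
	FailureThreshold: 3,  // assumed; restart follows repeated failures
}

func main() { _ = livenessProbe }
```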
pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-acl-logging" containerID="cri-o://159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.656600 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: I0202 06:58:25.709945 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" containerID="cri-o://25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" gracePeriod=30 Feb 02 06:58:25 crc kubenswrapper[4842]: E0202 06:58:25.965902 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1e4f7c_d788_428b_bea6_e862234bfc59.slice/crio-conmon-64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1e4f7c_d788_428b_bea6_e862234bfc59.slice/crio-97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1e4f7c_d788_428b_bea6_e862234bfc59.slice/crio-6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1e4f7c_d788_428b_bea6_e862234bfc59.slice/crio-conmon-6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1e4f7c_d788_428b_bea6_e862234bfc59.slice/crio-conmon-97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d.scope\": RecentStats: unable to find data in memory cache]" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.013588 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/3.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.015428 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovn-acl-logging/0.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.015848 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovn-controller/0.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.016246 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071282 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n2fbb"] Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071701 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071713 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071722 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071728 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071736 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kubecfg-setup" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071743 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kubecfg-setup" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071751 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071758 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071766 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071772 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071780 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" containerName="registry" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071786 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" containerName="registry" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071795 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-acl-logging" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071802 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-acl-logging" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071809 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-node" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071815 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-node" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071826 4842 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="nbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071832 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="nbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071841 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="northd" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071847 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="northd" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071855 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="sbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071861 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="sbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.071869 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071875 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071956 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-acl-logging" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071966 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="nbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071973 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071979 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071985 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-ovn-metrics" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071992 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovn-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.071999 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="northd" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072007 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072013 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072020 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="kube-rbac-proxy-node" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072030 4842 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b76f3bc4-4824-422b-a14a-e7cd193ed30d" containerName="registry" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072038 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="sbdb" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.072118 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072125 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.072134 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072140 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.072293 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerName="ovnkube-controller" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.075856 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178440 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178511 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178544 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178578 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178601 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178603 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod 
"3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178600 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178625 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178653 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178685 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178686 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178725 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash" (OuterVolumeSpecName: "host-slash") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178734 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178763 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178795 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178912 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178960 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178970 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.178996 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179039 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179075 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179070 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179111 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179158 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179191 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179279 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179315 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179391 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdmbp\" (UniqueName: \"kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp\") pod \"3f1e4f7c-d788-428b-bea6-e862234bfc59\" (UID: \"3f1e4f7c-d788-428b-bea6-e862234bfc59\") " Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179403 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179460 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179502 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179575 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-log-socket\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179647 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-ovn\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179679 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-systemd-units\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179739 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-netns\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179777 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179787 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-var-lib-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179845 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179869 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovn-node-metrics-cert\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179884 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log" (OuterVolumeSpecName: "node-log") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179895 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179915 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket" (OuterVolumeSpecName: "log-socket") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179940 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-config\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179960 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.179994 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-script-lib\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180136 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180195 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180270 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-systemd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180312 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-node-log\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180350 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-env-overrides\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180384 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-netd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180451 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-etc-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180481 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180514 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-slash\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180552 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-kubelet\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180580 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv8hm\" (UniqueName: \"kubernetes.io/projected/cd14c13b-bd70-4e1c-9b22-b181fc32f958-kube-api-access-lv8hm\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180615 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-bin\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180712 4842 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180733 4842 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180751 4842 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180768 4842 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180786 4842 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180804 4842 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180823 4842 
reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-slash\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180839 4842 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180856 4842 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180873 4842 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180890 4842 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180907 4842 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180926 4842 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180943 4842 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180961 4842 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180977 4842 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-node-log\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.180994 4842 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-log-socket\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.186255 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp" (OuterVolumeSpecName: "kube-api-access-qdmbp") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "kube-api-access-qdmbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.186732 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.206133 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3f1e4f7c-d788-428b-bea6-e862234bfc59" (UID: "3f1e4f7c-d788-428b-bea6-e862234bfc59"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.238130 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovnkube-controller/3.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.240088 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovn-acl-logging/0.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.240806 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-njnbq_3f1e4f7c-d788-428b-bea6-e862234bfc59/ovn-controller/0.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241075 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241101 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241109 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241117 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241124 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241130 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" exitCode=0 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241138 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" exitCode=143 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241146 4842 generic.go:334] "Generic (PLEG): container 
finished" podID="3f1e4f7c-d788-428b-bea6-e862234bfc59" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" exitCode=143 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241190 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241239 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241257 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241270 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241282 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241294 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241305 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241318 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241326 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241333 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241340 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241347 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 
06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241353 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241360 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241355 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241377 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241367 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241526 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241570 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241593 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241605 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241617 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241629 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241641 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241653 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241665 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241677 4842 pod_container_deletor.go:114] "Failed to issue the request to 
remove container" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241688 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241705 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241721 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241734 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241747 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241757 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241767 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241778 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241788 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241799 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241809 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241820 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241834 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-njnbq" event={"ID":"3f1e4f7c-d788-428b-bea6-e862234bfc59","Type":"ContainerDied","Data":"ad55e0c8d5649109a4ec1a9a3e073a9a325c6f3565638121dd923673a8430c3b"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241854 
4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241867 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241877 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241888 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241899 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241910 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241922 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241933 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241944 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.241955 4842 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.243841 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/2.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.244740 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/1.log" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.244802 4842 generic.go:334] "Generic (PLEG): container finished" podID="c1fd21cd-ea6a-44a0-b136-f338fc97cf18" containerID="3b21f8e1a886dde4d1d02d4825a8f34dbf2fb604aa25d226e93ac27f195f2631" exitCode=2 Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.244842 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerDied","Data":"3b21f8e1a886dde4d1d02d4825a8f34dbf2fb604aa25d226e93ac27f195f2631"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.244881 4842 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d"} Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.245406 4842 scope.go:117] "RemoveContainer" containerID="3b21f8e1a886dde4d1d02d4825a8f34dbf2fb604aa25d226e93ac27f195f2631" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.245701 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gmkx9_openshift-multus(c1fd21cd-ea6a-44a0-b136-f338fc97cf18)\"" pod="openshift-multus/multus-gmkx9" podUID="c1fd21cd-ea6a-44a0-b136-f338fc97cf18" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.256827 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.280324 4842 scope.go:117] "RemoveContainer" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282440 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282465 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-systemd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282486 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-node-log\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282508 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-env-overrides\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282523 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-netd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282546 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-etc-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282563 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282580 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-slash\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282596 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv8hm\" (UniqueName: \"kubernetes.io/projected/cd14c13b-bd70-4e1c-9b22-b181fc32f958-kube-api-access-lv8hm\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282611 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-kubelet\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282607 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-systemd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282652 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-bin\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282619 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-node-log\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282710 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-log-socket\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282704 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282752 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-slash\") pod \"ovnkube-node-n2fbb\" (UID: 
\"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282673 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-log-socket\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282779 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-etc-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282800 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-ovn\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282826 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-systemd-units\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282834 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-kubelet\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282856 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-netns\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282886 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-var-lib-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282909 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovn-node-metrics-cert\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282929 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-config\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc 
kubenswrapper[4842]: I0202 06:58:26.282951 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-ovn\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282982 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-script-lib\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282790 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-netd\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282724 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.283059 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-run-netns\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.283060 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.283090 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-var-lib-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282905 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-systemd-units\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.282853 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-host-cni-bin\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.283013 4842 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cd14c13b-bd70-4e1c-9b22-b181fc32f958-run-openvswitch\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.284247 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-config\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.284561 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-env-overrides\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.284765 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdmbp\" (UniqueName: \"kubernetes.io/projected/3f1e4f7c-d788-428b-bea6-e862234bfc59-kube-api-access-qdmbp\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.284834 4842 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f1e4f7c-d788-428b-bea6-e862234bfc59-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.284865 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f1e4f7c-d788-428b-bea6-e862234bfc59-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.287140 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovnkube-script-lib\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.289652 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd14c13b-bd70-4e1c-9b22-b181fc32f958-ovn-node-metrics-cert\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.303905 4842 scope.go:117] "RemoveContainer" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.304109 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-njnbq"] Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.309528 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-njnbq"] Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.310569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv8hm\" (UniqueName: \"kubernetes.io/projected/cd14c13b-bd70-4e1c-9b22-b181fc32f958-kube-api-access-lv8hm\") pod \"ovnkube-node-n2fbb\" (UID: \"cd14c13b-bd70-4e1c-9b22-b181fc32f958\") " pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 
06:58:26.319278 4842 scope.go:117] "RemoveContainer" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.332554 4842 scope.go:117] "RemoveContainer" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.350108 4842 scope.go:117] "RemoveContainer" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.366174 4842 scope.go:117] "RemoveContainer" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.378875 4842 scope.go:117] "RemoveContainer" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.398181 4842 scope.go:117] "RemoveContainer" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.410540 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.414628 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.415019 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415066 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} err="failed to get container status \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": rpc error: code = NotFound desc = could not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415091 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.415332 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": container with ID starting with 72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716 not found: ID does not exist" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415349 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} err="failed to get container status \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": rpc error: code = NotFound desc = could not find container 
\"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": container with ID starting with 72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415382 4842 scope.go:117] "RemoveContainer" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.415610 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": container with ID starting with 97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d not found: ID does not exist" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415636 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} err="failed to get container status \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": rpc error: code = NotFound desc = could not find container \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": container with ID starting with 97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415652 4842 scope.go:117] "RemoveContainer" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.415904 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": container with ID starting with 64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba not found: ID does not exist" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415951 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} err="failed to get container status \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": rpc error: code = NotFound desc = could not find container \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": container with ID starting with 64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.415967 4842 scope.go:117] "RemoveContainer" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.416183 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": container with ID starting with 6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004 not found: ID does not exist" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.416203 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} 
err="failed to get container status \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": rpc error: code = NotFound desc = could not find container \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": container with ID starting with 6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.416236 4842 scope.go:117] "RemoveContainer" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.416563 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": container with ID starting with d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4 not found: ID does not exist" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.416628 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} err="failed to get container status \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": rpc error: code = NotFound desc = could not find container \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": container with ID starting with d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.416687 4842 scope.go:117] "RemoveContainer" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.416983 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": container with ID starting with 78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32 not found: ID does not exist" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417003 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} err="failed to get container status \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": rpc error: code = NotFound desc = could not find container \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": container with ID starting with 78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417017 4842 scope.go:117] "RemoveContainer" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.417301 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": container with ID starting with 159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33 not found: ID does not exist" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417320 4842 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} err="failed to get container status \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": rpc error: code = NotFound desc = could not find container \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": container with ID starting with 159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417332 4842 scope.go:117] "RemoveContainer" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.417512 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": container with ID starting with 638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5 not found: ID does not exist" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417556 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} err="failed to get container status \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": rpc error: code = NotFound desc = could not find container \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": container with ID starting with 638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417571 4842 scope.go:117] "RemoveContainer" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: E0202 06:58:26.417827 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": container with ID starting with 8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe not found: ID does not exist" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417843 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} err="failed to get container status \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": rpc error: code = NotFound desc = could not find container \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": container with ID starting with 8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.417855 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418093 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} err="failed to get container status \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": rpc error: code = NotFound desc = could 
not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418126 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418395 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} err="failed to get container status \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": rpc error: code = NotFound desc = could not find container \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": container with ID starting with 72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418409 4842 scope.go:117] "RemoveContainer" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418852 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} err="failed to get container status \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": rpc error: code = NotFound desc = could not find container \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": container with ID starting with 97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.418980 4842 scope.go:117] "RemoveContainer" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.419500 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} err="failed to get container status \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": rpc error: code = NotFound desc = could not find container \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": container with ID starting with 64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.419589 4842 scope.go:117] "RemoveContainer" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.419924 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} err="failed to get container status \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": rpc error: code = NotFound desc = could not find container \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": container with ID starting with 6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.419945 4842 scope.go:117] "RemoveContainer" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420197 4842 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} err="failed to get container status \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": rpc error: code = NotFound desc = could not find container \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": container with ID starting with d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420247 4842 scope.go:117] "RemoveContainer" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420530 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} err="failed to get container status \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": rpc error: code = NotFound desc = could not find container \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": container with ID starting with 78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420550 4842 scope.go:117] "RemoveContainer" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420811 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} err="failed to get container status \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": rpc error: code = NotFound desc = could not find container \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": container with ID starting with 159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.420864 4842 scope.go:117] "RemoveContainer" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.421135 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} err="failed to get container status \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": rpc error: code = NotFound desc = could not find container \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": container with ID starting with 638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.421157 4842 scope.go:117] "RemoveContainer" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.421453 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} err="failed to get container status \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": rpc error: code = NotFound desc = could not find container \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": container with ID starting with 
8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.421507 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.421803 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} err="failed to get container status \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": rpc error: code = NotFound desc = could not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.422332 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.423057 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} err="failed to get container status \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": rpc error: code = NotFound desc = could not find container \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": container with ID starting with 72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.423082 4842 scope.go:117] "RemoveContainer" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.423674 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} err="failed to get container status \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": rpc error: code = NotFound desc = could not find container \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": container with ID starting with 97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.423722 4842 scope.go:117] "RemoveContainer" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424076 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} err="failed to get container status \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": rpc error: code = NotFound desc = could not find container \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": container with ID starting with 64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424125 4842 scope.go:117] "RemoveContainer" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424494 4842 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} err="failed to get container status \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": rpc error: code = NotFound desc = could not find container \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": container with ID starting with 6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424537 4842 scope.go:117] "RemoveContainer" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424922 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} err="failed to get container status \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": rpc error: code = NotFound desc = could not find container \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": container with ID starting with d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.424976 4842 scope.go:117] "RemoveContainer" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.425300 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} err="failed to get container status \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": rpc error: code = NotFound desc = could not find container \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": container with ID starting with 78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.425343 4842 scope.go:117] "RemoveContainer" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.425642 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} err="failed to get container status \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": rpc error: code = NotFound desc = could not find container \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": container with ID starting with 159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.425667 4842 scope.go:117] "RemoveContainer" containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.425979 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} err="failed to get container status \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": rpc error: code = NotFound desc = could not find container \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": container with ID starting with 638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5 not found: ID does not exist" Feb 
02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.426022 4842 scope.go:117] "RemoveContainer" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.426468 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} err="failed to get container status \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": rpc error: code = NotFound desc = could not find container \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": container with ID starting with 8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.426518 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.426908 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} err="failed to get container status \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": rpc error: code = NotFound desc = could not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.426937 4842 scope.go:117] "RemoveContainer" containerID="72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.427248 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716"} err="failed to get container status \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": rpc error: code = NotFound desc = could not find container \"72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716\": container with ID starting with 72937ca7af06b32caacbf94c32cefeb2b7ac5fcc0f562bbcdab417ec89e89716 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.427286 4842 scope.go:117] "RemoveContainer" containerID="97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.427661 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d"} err="failed to get container status \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": rpc error: code = NotFound desc = could not find container \"97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d\": container with ID starting with 97b4d289608ccf886cc9936dba03a2d3fd950a7f4629202bbfb683b68a15b07d not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.427690 4842 scope.go:117] "RemoveContainer" containerID="64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.427988 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba"} err="failed to get container status 
\"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": rpc error: code = NotFound desc = could not find container \"64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba\": container with ID starting with 64121799e098c62f6909129606c9a088906c1502a1d72e21c81b049dc6c079ba not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.428019 4842 scope.go:117] "RemoveContainer" containerID="6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.428383 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004"} err="failed to get container status \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": rpc error: code = NotFound desc = could not find container \"6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004\": container with ID starting with 6cd64066ae48327749e03b83dc53a58696343ccfb5786528504ef16803f8e004 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.428410 4842 scope.go:117] "RemoveContainer" containerID="d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.428710 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4"} err="failed to get container status \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": rpc error: code = NotFound desc = could not find container \"d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4\": container with ID starting with d176665c5c2481182d5cd641d21f9cb50781291167d3f9008f4cb9e75a3ddab4 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.428762 4842 scope.go:117] "RemoveContainer" containerID="78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429167 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32"} err="failed to get container status \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": rpc error: code = NotFound desc = could not find container \"78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32\": container with ID starting with 78c42d6a01d4f24e407deb5140f3b4a0be2942c7dcf13ccc43335909ba8b4b32 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429195 4842 scope.go:117] "RemoveContainer" containerID="159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429571 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33"} err="failed to get container status \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": rpc error: code = NotFound desc = could not find container \"159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33\": container with ID starting with 159c12a1e3df440131e22c5d288ed9a03f020ae6d55854bd3c127bf1787bef33 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429600 4842 scope.go:117] "RemoveContainer" 
containerID="638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429890 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5"} err="failed to get container status \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": rpc error: code = NotFound desc = could not find container \"638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5\": container with ID starting with 638e7e5fed1f051aa3a664bd1dcdf1ae708306c8e379242b72d5faf64e6e28e5 not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.429942 4842 scope.go:117] "RemoveContainer" containerID="8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.430247 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe"} err="failed to get container status \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": rpc error: code = NotFound desc = could not find container \"8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe\": container with ID starting with 8c9d89660193009c9a6829660255a42fb1c8c9e94eb02b0c85db45aaca7940fe not found: ID does not exist" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.430276 4842 scope.go:117] "RemoveContainer" containerID="25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2" Feb 02 06:58:26 crc kubenswrapper[4842]: I0202 06:58:26.430547 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2"} err="failed to get container status \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": rpc error: code = NotFound desc = could not find container \"25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2\": container with ID starting with 25a48028d3899dd192a445fcf799123d11e031180a343860caa721a64705e0e2 not found: ID does not exist" Feb 02 06:58:27 crc kubenswrapper[4842]: I0202 06:58:27.256903 4842 generic.go:334] "Generic (PLEG): container finished" podID="cd14c13b-bd70-4e1c-9b22-b181fc32f958" containerID="c773a8e662798b3ea6b5b7e12e5e91862c21ba2f37849b77e63eb9b0e601fc93" exitCode=0 Feb 02 06:58:27 crc kubenswrapper[4842]: I0202 06:58:27.257242 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerDied","Data":"c773a8e662798b3ea6b5b7e12e5e91862c21ba2f37849b77e63eb9b0e601fc93"} Feb 02 06:58:27 crc kubenswrapper[4842]: I0202 06:58:27.257277 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"b2ccbbf96e82939af0ad2939dbd92ab1daed6a0a27472456b13a227a47610578"} Feb 02 06:58:27 crc kubenswrapper[4842]: I0202 06:58:27.446403 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f1e4f7c-d788-428b-bea6-e862234bfc59" path="/var/lib/kubelet/pods/3f1e4f7c-d788-428b-bea6-e862234bfc59/volumes" Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.265772 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" 
event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"fcc1237e67d31d2a48b1c31a500e518e9ea752835aeaa32be6a318e2a8f64fe8"} Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.266508 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"292f77e1b6a81524ec24767c131d7b36e7b618b95af6997d52309613d974b917"} Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.266531 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"db105e7a453b1010c59b8183ce87644a8c934f06da215abb1acc6fdcb057dc4c"} Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.266548 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"730360a7a296c9054b758ed0472b2f5a9b8a1c6e91ee584109142845e2816172"} Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.266564 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"76748dc8f7ea234e2d4887b8c145a00bbd46d074c24a344c9d8386dcdfa75e07"} Feb 02 06:58:28 crc kubenswrapper[4842]: I0202 06:58:28.266579 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"cb3364c588faa6b89f9248648ec4d99b1eaf155f60fe973ad1e53b3982551ae8"} Feb 02 06:58:31 crc kubenswrapper[4842]: I0202 06:58:31.295531 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"392d61047affe59f5b83792c432c0d27e89d3a65324ff2350dc1bc801b09d3d0"} Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.310402 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" event={"ID":"cd14c13b-bd70-4e1c-9b22-b181fc32f958","Type":"ContainerStarted","Data":"0a1f22c99e002de5246d42c7817684e4759b0d3b29d6dc85e9de02c0556faa61"} Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.310786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.310812 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.311956 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.344357 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" podStartSLOduration=7.344336872 podStartE2EDuration="7.344336872s" podCreationTimestamp="2026-02-02 06:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:58:33.342805774 +0000 UTC m=+738.720073696" watchObservedRunningTime="2026-02-02 06:58:33.344336872 +0000 UTC m=+738.721604814" Feb 02 06:58:33 crc kubenswrapper[4842]: 
I0202 06:58:33.344657 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:33 crc kubenswrapper[4842]: I0202 06:58:33.353586 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.970609 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-q54vf"] Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.972307 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.976268 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.976598 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.977634 4842 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-9bxn5" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.977891 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Feb 02 06:58:34 crc kubenswrapper[4842]: I0202 06:58:34.981395 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q54vf"] Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.106671 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqq7z\" (UniqueName: \"kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.106880 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.107035 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.208506 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqq7z\" (UniqueName: \"kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.208591 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.208660 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.209009 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.210413 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.241599 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqq7z\" (UniqueName: \"kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z\") pod \"crc-storage-crc-q54vf\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: I0202 06:58:35.306833 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: E0202 06:58:35.342189 4842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(4e9a20a6bc189d79b17a08d42502d97158f9b3c455bff46fe72f1e01acbb1591): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 06:58:35 crc kubenswrapper[4842]: E0202 06:58:35.342352 4842 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(4e9a20a6bc189d79b17a08d42502d97158f9b3c455bff46fe72f1e01acbb1591): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: E0202 06:58:35.342381 4842 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(4e9a20a6bc189d79b17a08d42502d97158f9b3c455bff46fe72f1e01acbb1591): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:35 crc kubenswrapper[4842]: E0202 06:58:35.342428 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-q54vf_crc-storage(d49ae49a-4fb5-4d9c-894e-6a743cbe9c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-q54vf_crc-storage(d49ae49a-4fb5-4d9c-894e-6a743cbe9c20)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(4e9a20a6bc189d79b17a08d42502d97158f9b3c455bff46fe72f1e01acbb1591): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-q54vf" podUID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" Feb 02 06:58:36 crc kubenswrapper[4842]: I0202 06:58:36.326505 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:36 crc kubenswrapper[4842]: I0202 06:58:36.327641 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:36 crc kubenswrapper[4842]: E0202 06:58:36.368733 4842 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(831f10998e29558bb665b7c10e58517e59b53bfa8db13d517b90ad51fcfcc29d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 02 06:58:36 crc kubenswrapper[4842]: E0202 06:58:36.368828 4842 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(831f10998e29558bb665b7c10e58517e59b53bfa8db13d517b90ad51fcfcc29d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:36 crc kubenswrapper[4842]: E0202 06:58:36.368863 4842 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(831f10998e29558bb665b7c10e58517e59b53bfa8db13d517b90ad51fcfcc29d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:36 crc kubenswrapper[4842]: E0202 06:58:36.368938 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-q54vf_crc-storage(d49ae49a-4fb5-4d9c-894e-6a743cbe9c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-q54vf_crc-storage(d49ae49a-4fb5-4d9c-894e-6a743cbe9c20)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-q54vf_crc-storage_d49ae49a-4fb5-4d9c-894e-6a743cbe9c20_0(831f10998e29558bb665b7c10e58517e59b53bfa8db13d517b90ad51fcfcc29d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 02 06:58:38 crc kubenswrapper[4842]: I0202 06:58:38.434148 4842 scope.go:117] "RemoveContainer" containerID="3b21f8e1a886dde4d1d02d4825a8f34dbf2fb604aa25d226e93ac27f195f2631"
Feb 02 06:58:39 crc kubenswrapper[4842]: I0202 06:58:39.348465 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/2.log"
Feb 02 06:58:39 crc kubenswrapper[4842]: I0202 06:58:39.349733 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/1.log"
Feb 02 06:58:39 crc kubenswrapper[4842]: I0202 06:58:39.349938 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gmkx9" event={"ID":"c1fd21cd-ea6a-44a0-b136-f338fc97cf18","Type":"ContainerStarted","Data":"e4c8473c86d301bda5245277ad649c0655932872ce690973718b44fcdded7794"}
Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.146103 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.146183 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.146283 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr"
Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.147051 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.147143 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62" gracePeriod=600
Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.387318 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62" exitCode=0
Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.387416 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62"}
Feb 02 06:58:42 crc kubenswrapper[4842]: I0202 06:58:42.387833 4842 scope.go:117] "RemoveContainer" containerID="5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00"
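[editor's note] The machine-config-daemon restart above is the standard liveness-probe sequence: the HTTP GET to 127.0.0.1:8798/health is refused, the prober reports the failure, and the runtime manager kills the container with the pod's grace period (600s here) so a fresh one can start at 06:58:43 (below). A rough stand-in for a single probe attempt (simplified: the kubelet's prober also counts 3xx responses as success and applies failureThreshold before acting):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeOnce performs one HTTP liveness check: a transport error such
    // as "connect: connection refused", or a status outside 2xx, counts
    // as a failure here.
    func probeOnce(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("probe failed: %w", err)
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 300 {
            return fmt.Errorf("probe failed: status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        // Same endpoint the failing probe above targets.
        fmt.Println(probeOnce("http://127.0.0.1:8798/health", 2*time.Second))
    }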
containerID="5170675f524a0cbf4768ef91dd8be4f2ac17b44f3012bcf35bd18ead443e0d00" Feb 02 06:58:43 crc kubenswrapper[4842]: I0202 06:58:43.398204 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93"} Feb 02 06:58:50 crc kubenswrapper[4842]: I0202 06:58:50.433431 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:50 crc kubenswrapper[4842]: I0202 06:58:50.434716 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:50 crc kubenswrapper[4842]: I0202 06:58:50.918187 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-q54vf"] Feb 02 06:58:50 crc kubenswrapper[4842]: W0202 06:58:50.939143 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd49ae49a_4fb5_4d9c_894e_6a743cbe9c20.slice/crio-c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78 WatchSource:0}: Error finding container c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78: Status 404 returned error can't find the container with id c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78 Feb 02 06:58:50 crc kubenswrapper[4842]: I0202 06:58:50.943286 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 06:58:51 crc kubenswrapper[4842]: I0202 06:58:51.460813 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q54vf" event={"ID":"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20","Type":"ContainerStarted","Data":"c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78"} Feb 02 06:58:52 crc kubenswrapper[4842]: I0202 06:58:52.470056 4842 generic.go:334] "Generic (PLEG): container finished" podID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" containerID="a4b70fd2cb99fa10540e4dadeda4038897a65ae03e5544ded9e1704361291cbe" exitCode=0 Feb 02 06:58:52 crc kubenswrapper[4842]: I0202 06:58:52.470189 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q54vf" event={"ID":"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20","Type":"ContainerDied","Data":"a4b70fd2cb99fa10540e4dadeda4038897a65ae03e5544ded9e1704361291cbe"} Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.768485 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.858072 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqq7z\" (UniqueName: \"kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z\") pod \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.859891 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt\") pod \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.859984 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage\") pod \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\" (UID: \"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20\") " Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.860469 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" (UID: "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.865892 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z" (OuterVolumeSpecName: "kube-api-access-dqq7z") pod "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" (UID: "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20"). InnerVolumeSpecName "kube-api-access-dqq7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.881707 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" (UID: "d49ae49a-4fb5-4d9c-894e-6a743cbe9c20"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.962414 4842 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-crc-storage\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.962481 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqq7z\" (UniqueName: \"kubernetes.io/projected/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-kube-api-access-dqq7z\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:53 crc kubenswrapper[4842]: I0202 06:58:53.962511 4842 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/d49ae49a-4fb5-4d9c-894e-6a743cbe9c20-node-mnt\") on node \"crc\" DevicePath \"\"" Feb 02 06:58:54 crc kubenswrapper[4842]: I0202 06:58:54.486300 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-q54vf" event={"ID":"d49ae49a-4fb5-4d9c-894e-6a743cbe9c20","Type":"ContainerDied","Data":"c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78"} Feb 02 06:58:54 crc kubenswrapper[4842]: I0202 06:58:54.486357 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c31e0eb6fcae043290bc03c7f171c32ecec74bc1379d7d706c362fc7dc6bfe78" Feb 02 06:58:54 crc kubenswrapper[4842]: I0202 06:58:54.486385 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-q54vf" Feb 02 06:58:56 crc kubenswrapper[4842]: I0202 06:58:56.490253 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n2fbb" Feb 02 06:58:57 crc kubenswrapper[4842]: I0202 06:58:57.334974 4842 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.350161 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n"] Feb 02 06:59:01 crc kubenswrapper[4842]: E0202 06:59:01.350836 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" containerName="storage" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.350857 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" containerName="storage" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.351023 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49ae49a-4fb5-4d9c-894e-6a743cbe9c20" containerName="storage" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.352100 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.357638 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.365958 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n"] Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.370971 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.371044 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5c4h\" (UniqueName: \"kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.371115 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.473113 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.473601 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5c4h\" (UniqueName: \"kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.473691 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.473726 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.474449 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.503590 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5c4h\" (UniqueName: \"kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.668122 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:01 crc kubenswrapper[4842]: I0202 06:59:01.941286 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n"] Feb 02 06:59:01 crc kubenswrapper[4842]: W0202 06:59:01.945606 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e244b75_9c3a_4f20_9bd7_071fb2cc7883.slice/crio-d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c WatchSource:0}: Error finding container d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c: Status 404 returned error can't find the container with id d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c Feb 02 06:59:02 crc kubenswrapper[4842]: I0202 06:59:02.536902 4842 generic.go:334] "Generic (PLEG): container finished" podID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerID="f1b8819b4d1cd17b3c3b1714c1d0379f57ed6dce58950e4412eb686b40f4f5a8" exitCode=0 Feb 02 06:59:02 crc kubenswrapper[4842]: I0202 06:59:02.537710 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" event={"ID":"7e244b75-9c3a-4f20-9bd7-071fb2cc7883","Type":"ContainerDied","Data":"f1b8819b4d1cd17b3c3b1714c1d0379f57ed6dce58950e4412eb686b40f4f5a8"} Feb 02 06:59:02 crc kubenswrapper[4842]: I0202 06:59:02.538859 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" event={"ID":"7e244b75-9c3a-4f20-9bd7-071fb2cc7883","Type":"ContainerStarted","Data":"d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c"} Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.518440 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.519369 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.541369 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.610776 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qr5d\" (UniqueName: \"kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.610887 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.611044 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.711951 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.712020 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qr5d\" (UniqueName: \"kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.712065 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.712360 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.712449 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.736204 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4qr5d\" (UniqueName: \"kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d\") pod \"redhat-operators-8ltqd\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:03 crc kubenswrapper[4842]: I0202 06:59:03.832108 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.034168 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.549732 4842 generic.go:334] "Generic (PLEG): container finished" podID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerID="7524c5a8e7b6861a405892c3cbf5335926049c145ff021686acc4b0f8e96bf08" exitCode=0 Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.549798 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" event={"ID":"7e244b75-9c3a-4f20-9bd7-071fb2cc7883","Type":"ContainerDied","Data":"7524c5a8e7b6861a405892c3cbf5335926049c145ff021686acc4b0f8e96bf08"} Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.552257 4842 generic.go:334] "Generic (PLEG): container finished" podID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerID="e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f" exitCode=0 Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.552321 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerDied","Data":"e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f"} Feb 02 06:59:04 crc kubenswrapper[4842]: I0202 06:59:04.552353 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerStarted","Data":"2ff52db295c35e880c88d5b5145e78895e572aab912fdc448aa89426cbd58de9"} Feb 02 06:59:05 crc kubenswrapper[4842]: I0202 06:59:05.573296 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerStarted","Data":"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3"} Feb 02 06:59:05 crc kubenswrapper[4842]: I0202 06:59:05.580941 4842 generic.go:334] "Generic (PLEG): container finished" podID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerID="d6100efc6d95a57f6ea0c8740bd259b211506a1b5192e697b860d7dcd3822564" exitCode=0 Feb 02 06:59:05 crc kubenswrapper[4842]: I0202 06:59:05.580992 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" event={"ID":"7e244b75-9c3a-4f20-9bd7-071fb2cc7883","Type":"ContainerDied","Data":"d6100efc6d95a57f6ea0c8740bd259b211506a1b5192e697b860d7dcd3822564"} Feb 02 06:59:06 crc kubenswrapper[4842]: I0202 06:59:06.593708 4842 generic.go:334] "Generic (PLEG): container finished" podID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerID="144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3" exitCode=0 Feb 02 06:59:06 crc kubenswrapper[4842]: I0202 06:59:06.593809 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" 
event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerDied","Data":"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3"} Feb 02 06:59:06 crc kubenswrapper[4842]: I0202 06:59:06.932831 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.061749 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle\") pod \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.061855 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util\") pod \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.061903 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5c4h\" (UniqueName: \"kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h\") pod \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\" (UID: \"7e244b75-9c3a-4f20-9bd7-071fb2cc7883\") " Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.062367 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle" (OuterVolumeSpecName: "bundle") pod "7e244b75-9c3a-4f20-9bd7-071fb2cc7883" (UID: "7e244b75-9c3a-4f20-9bd7-071fb2cc7883"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.073035 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h" (OuterVolumeSpecName: "kube-api-access-d5c4h") pod "7e244b75-9c3a-4f20-9bd7-071fb2cc7883" (UID: "7e244b75-9c3a-4f20-9bd7-071fb2cc7883"). InnerVolumeSpecName "kube-api-access-d5c4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.099320 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util" (OuterVolumeSpecName: "util") pod "7e244b75-9c3a-4f20-9bd7-071fb2cc7883" (UID: "7e244b75-9c3a-4f20-9bd7-071fb2cc7883"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.163448 4842 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.163482 4842 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-util\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.163494 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5c4h\" (UniqueName: \"kubernetes.io/projected/7e244b75-9c3a-4f20-9bd7-071fb2cc7883-kube-api-access-d5c4h\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.603681 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerStarted","Data":"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a"} Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.615989 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" event={"ID":"7e244b75-9c3a-4f20-9bd7-071fb2cc7883","Type":"ContainerDied","Data":"d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c"} Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.616057 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d39e6b033f2cba8bf62594e0e22c48d6b1f154a990c26a31fb1ba280cdffca7c" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.616074 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n" Feb 02 06:59:07 crc kubenswrapper[4842]: I0202 06:59:07.635198 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8ltqd" podStartSLOduration=2.089322905 podStartE2EDuration="4.635177555s" podCreationTimestamp="2026-02-02 06:59:03 +0000 UTC" firstStartedPulling="2026-02-02 06:59:04.553239175 +0000 UTC m=+769.930507087" lastFinishedPulling="2026-02-02 06:59:07.099093785 +0000 UTC m=+772.476361737" observedRunningTime="2026-02-02 06:59:07.628844989 +0000 UTC m=+773.006112911" watchObservedRunningTime="2026-02-02 06:59:07.635177555 +0000 UTC m=+773.012445487" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.862054 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-6qznw"] Feb 02 06:59:11 crc kubenswrapper[4842]: E0202 06:59:11.862578 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="pull" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.862597 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="pull" Feb 02 06:59:11 crc kubenswrapper[4842]: E0202 06:59:11.862614 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="util" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.862621 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="util" Feb 02 06:59:11 crc kubenswrapper[4842]: E0202 06:59:11.862637 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="extract" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.862647 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="extract" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.862762 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e244b75-9c3a-4f20-9bd7-071fb2cc7883" containerName="extract" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.863136 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.865031 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.865324 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.865398 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fpv6k" Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.876439 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-6qznw"] Feb 02 06:59:11 crc kubenswrapper[4842]: I0202 06:59:11.926159 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9t2h\" (UniqueName: \"kubernetes.io/projected/3e9d6ba3-9c88-4425-87b9-8a5abd664ce7-kube-api-access-b9t2h\") pod \"nmstate-operator-646758c888-6qznw\" (UID: \"3e9d6ba3-9c88-4425-87b9-8a5abd664ce7\") " pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" Feb 02 06:59:12 crc kubenswrapper[4842]: I0202 06:59:12.027194 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9t2h\" (UniqueName: \"kubernetes.io/projected/3e9d6ba3-9c88-4425-87b9-8a5abd664ce7-kube-api-access-b9t2h\") pod \"nmstate-operator-646758c888-6qznw\" (UID: \"3e9d6ba3-9c88-4425-87b9-8a5abd664ce7\") " pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" Feb 02 06:59:12 crc kubenswrapper[4842]: I0202 06:59:12.062673 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9t2h\" (UniqueName: \"kubernetes.io/projected/3e9d6ba3-9c88-4425-87b9-8a5abd664ce7-kube-api-access-b9t2h\") pod \"nmstate-operator-646758c888-6qznw\" (UID: \"3e9d6ba3-9c88-4425-87b9-8a5abd664ce7\") " pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" Feb 02 06:59:12 crc kubenswrapper[4842]: I0202 06:59:12.185806 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" Feb 02 06:59:12 crc kubenswrapper[4842]: I0202 06:59:12.428880 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-6qznw"] Feb 02 06:59:12 crc kubenswrapper[4842]: I0202 06:59:12.649375 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" event={"ID":"3e9d6ba3-9c88-4425-87b9-8a5abd664ce7","Type":"ContainerStarted","Data":"36e20619a2ef69ebeef34d4e079a85e04f26457dffdd43d4fc16cce1a90fc032"} Feb 02 06:59:13 crc kubenswrapper[4842]: I0202 06:59:13.832769 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:13 crc kubenswrapper[4842]: I0202 06:59:13.833074 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:14 crc kubenswrapper[4842]: I0202 06:59:14.659711 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" event={"ID":"3e9d6ba3-9c88-4425-87b9-8a5abd664ce7","Type":"ContainerStarted","Data":"ce7de959462d86cd7cbde251da43a9514aa907ca1c9f308f5cee35247dd9e55d"} Feb 02 06:59:14 crc kubenswrapper[4842]: I0202 06:59:14.689418 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-6qznw" podStartSLOduration=2.049395907 podStartE2EDuration="3.689393652s" podCreationTimestamp="2026-02-02 06:59:11 +0000 UTC" firstStartedPulling="2026-02-02 06:59:12.434512527 +0000 UTC m=+777.811780439" lastFinishedPulling="2026-02-02 06:59:14.074510262 +0000 UTC m=+779.451778184" observedRunningTime="2026-02-02 06:59:14.682640226 +0000 UTC m=+780.059908148" watchObservedRunningTime="2026-02-02 06:59:14.689393652 +0000 UTC m=+780.066661604" Feb 02 06:59:14 crc kubenswrapper[4842]: I0202 06:59:14.885445 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8ltqd" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="registry-server" probeResult="failure" output=< Feb 02 06:59:14 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 06:59:14 crc kubenswrapper[4842]: > Feb 02 06:59:15 crc kubenswrapper[4842]: I0202 06:59:15.840065 4842 scope.go:117] "RemoveContainer" containerID="eb46ef51b68530b7f2b8f5c7e049ebba4820dd4f4f0a8efd0feba8f483ed768d" Feb 02 06:59:16 crc kubenswrapper[4842]: I0202 06:59:16.677166 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gmkx9_c1fd21cd-ea6a-44a0-b136-f338fc97cf18/kube-multus/2.log" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.555184 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-h4nv5"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.555987 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.558617 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-tq46r" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.568849 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz8b6\" (UniqueName: \"kubernetes.io/projected/a4c06cff-e4b9-41be-a253-b1bf70dc1dc8-kube-api-access-rz8b6\") pod \"nmstate-metrics-54757c584b-h4nv5\" (UID: \"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.584342 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-h4nv5"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.597705 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.598533 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.601849 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.602042 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-hrqrp"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.603155 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.639591 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670041 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz8b6\" (UniqueName: \"kubernetes.io/projected/a4c06cff-e4b9-41be-a253-b1bf70dc1dc8-kube-api-access-rz8b6\") pod \"nmstate-metrics-54757c584b-h4nv5\" (UID: \"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670121 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfpl\" (UniqueName: \"kubernetes.io/projected/558d578f-dad2-4317-8efd-628e30fe306e-kube-api-access-6pfpl\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670165 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-nmstate-lock\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670197 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-dbus-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " 
pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670243 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-ovs-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670286 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a9864264-6d23-4a03-8464-6b52a81c01d1-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.670313 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbmqd\" (UniqueName: \"kubernetes.io/projected/a9864264-6d23-4a03-8464-6b52a81c01d1-kube-api-access-tbmqd\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.695663 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz8b6\" (UniqueName: \"kubernetes.io/projected/a4c06cff-e4b9-41be-a253-b1bf70dc1dc8-kube-api-access-rz8b6\") pod \"nmstate-metrics-54757c584b-h4nv5\" (UID: \"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.708592 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.709347 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.711687 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-n7j2j" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.711949 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.712150 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.721203 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771032 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfpl\" (UniqueName: \"kubernetes.io/projected/558d578f-dad2-4317-8efd-628e30fe306e-kube-api-access-6pfpl\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771090 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-nmstate-lock\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771119 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpf6k\" (UniqueName: \"kubernetes.io/projected/1875099f-a0f5-4ba0-b757-35755a6d0bcd-kube-api-access-hpf6k\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771151 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1875099f-a0f5-4ba0-b757-35755a6d0bcd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771171 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-dbus-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771170 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-nmstate-lock\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771187 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: 
\"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771240 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-ovs-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771289 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a9864264-6d23-4a03-8464-6b52a81c01d1-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771299 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-ovs-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771311 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbmqd\" (UniqueName: \"kubernetes.io/projected/a9864264-6d23-4a03-8464-6b52a81c01d1-kube-api-access-tbmqd\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.771468 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/558d578f-dad2-4317-8efd-628e30fe306e-dbus-socket\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.775850 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a9864264-6d23-4a03-8464-6b52a81c01d1-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.794913 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbmqd\" (UniqueName: \"kubernetes.io/projected/a9864264-6d23-4a03-8464-6b52a81c01d1-kube-api-access-tbmqd\") pod \"nmstate-webhook-8474b5b9d8-ctgl4\" (UID: \"a9864264-6d23-4a03-8464-6b52a81c01d1\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.795060 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfpl\" (UniqueName: \"kubernetes.io/projected/558d578f-dad2-4317-8efd-628e30fe306e-kube-api-access-6pfpl\") pod \"nmstate-handler-hrqrp\" (UID: \"558d578f-dad2-4317-8efd-628e30fe306e\") " pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.871866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1875099f-a0f5-4ba0-b757-35755a6d0bcd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: 
\"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.871923 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpf6k\" (UniqueName: \"kubernetes.io/projected/1875099f-a0f5-4ba0-b757-35755a6d0bcd-kube-api-access-hpf6k\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.871948 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: E0202 06:59:21.872101 4842 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 02 06:59:21 crc kubenswrapper[4842]: E0202 06:59:21.872159 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert podName:1875099f-a0f5-4ba0-b757-35755a6d0bcd nodeName:}" failed. No retries permitted until 2026-02-02 06:59:22.372140773 +0000 UTC m=+787.749408685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-z2jg2" (UID: "1875099f-a0f5-4ba0-b757-35755a6d0bcd") : secret "plugin-serving-cert" not found Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.872928 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1875099f-a0f5-4ba0-b757-35755a6d0bcd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.879056 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.898664 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpf6k\" (UniqueName: \"kubernetes.io/projected/1875099f-a0f5-4ba0-b757-35755a6d0bcd-kube-api-access-hpf6k\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.914110 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-655b6b84f6-kkbsq"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.914731 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.922984 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-655b6b84f6-kkbsq"] Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.940554 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.952420 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:21 crc kubenswrapper[4842]: W0202 06:59:21.972178 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod558d578f_dad2_4317_8efd_628e30fe306e.slice/crio-da157c5e0bbd98d95a22226d2f29c9527e0d821a20ada6e2f1875ca3b76c1ab1 WatchSource:0}: Error finding container da157c5e0bbd98d95a22226d2f29c9527e0d821a20ada6e2f1875ca3b76c1ab1: Status 404 returned error can't find the container with id da157c5e0bbd98d95a22226d2f29c9527e0d821a20ada6e2f1875ca3b76c1ab1 Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.972896 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-trusted-ca-bundle\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.972947 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.972971 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-oauth-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.973005 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.973026 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh27l\" (UniqueName: \"kubernetes.io/projected/0ec80d70-e53d-4045-a2b7-a61ad0464be2-kube-api-access-gh27l\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.973268 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-service-ca\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:21 crc kubenswrapper[4842]: I0202 06:59:21.973304 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-oauth-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074151 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-service-ca\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074198 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-oauth-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074247 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-trusted-ca-bundle\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074266 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074291 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-oauth-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074317 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.074331 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh27l\" (UniqueName: \"kubernetes.io/projected/0ec80d70-e53d-4045-a2b7-a61ad0464be2-kube-api-access-gh27l\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.075524 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-service-ca\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.075588 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-trusted-ca-bundle\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.075648 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.075647 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0ec80d70-e53d-4045-a2b7-a61ad0464be2-oauth-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.080496 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-serving-cert\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.081167 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0ec80d70-e53d-4045-a2b7-a61ad0464be2-console-oauth-config\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.095734 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh27l\" (UniqueName: \"kubernetes.io/projected/0ec80d70-e53d-4045-a2b7-a61ad0464be2-kube-api-access-gh27l\") pod \"console-655b6b84f6-kkbsq\" (UID: \"0ec80d70-e53d-4045-a2b7-a61ad0464be2\") " pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.114428 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-h4nv5"] Feb 02 06:59:22 crc kubenswrapper[4842]: W0202 06:59:22.157159 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9864264_6d23_4a03_8464_6b52a81c01d1.slice/crio-5efc19c6eb39e7cc922e7169278eab47fa90d09bbd259348e66ebdd81a4848d3 WatchSource:0}: Error finding container 5efc19c6eb39e7cc922e7169278eab47fa90d09bbd259348e66ebdd81a4848d3: Status 404 returned error can't find the container with id 5efc19c6eb39e7cc922e7169278eab47fa90d09bbd259348e66ebdd81a4848d3 Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.157440 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4"] Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.237618 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.377750 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.383257 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1875099f-a0f5-4ba0-b757-35755a6d0bcd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-z2jg2\" (UID: \"1875099f-a0f5-4ba0-b757-35755a6d0bcd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.493610 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-655b6b84f6-kkbsq"] Feb 02 06:59:22 crc kubenswrapper[4842]: W0202 06:59:22.502042 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ec80d70_e53d_4045_a2b7_a61ad0464be2.slice/crio-584c7eb65561d3875bbd94d5a62fa8945477f120bcd1dc59f7a380cfb3ed57a2 WatchSource:0}: Error finding container 584c7eb65561d3875bbd94d5a62fa8945477f120bcd1dc59f7a380cfb3ed57a2: Status 404 returned error can't find the container with id 584c7eb65561d3875bbd94d5a62fa8945477f120bcd1dc59f7a380cfb3ed57a2 Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.628060 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.809337 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hrqrp" event={"ID":"558d578f-dad2-4317-8efd-628e30fe306e","Type":"ContainerStarted","Data":"da157c5e0bbd98d95a22226d2f29c9527e0d821a20ada6e2f1875ca3b76c1ab1"} Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.810532 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" event={"ID":"a9864264-6d23-4a03-8464-6b52a81c01d1","Type":"ContainerStarted","Data":"5efc19c6eb39e7cc922e7169278eab47fa90d09bbd259348e66ebdd81a4848d3"} Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.811362 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" event={"ID":"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8","Type":"ContainerStarted","Data":"d5c227670e422b5990993f1600b1bea0cd17c7c7b853776d7ab084a6a609bd35"} Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.817605 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-655b6b84f6-kkbsq" event={"ID":"0ec80d70-e53d-4045-a2b7-a61ad0464be2","Type":"ContainerStarted","Data":"1f12aadf3b7e1323934209948c494ece5269230bd67c7ca6fcf6d600c8771f87"} Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.817676 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-655b6b84f6-kkbsq" event={"ID":"0ec80d70-e53d-4045-a2b7-a61ad0464be2","Type":"ContainerStarted","Data":"584c7eb65561d3875bbd94d5a62fa8945477f120bcd1dc59f7a380cfb3ed57a2"} Feb 02 06:59:22 crc kubenswrapper[4842]: I0202 06:59:22.843355 4842 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/console-655b6b84f6-kkbsq" podStartSLOduration=1.843334548 podStartE2EDuration="1.843334548s" podCreationTimestamp="2026-02-02 06:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 06:59:22.839658998 +0000 UTC m=+788.216926960" watchObservedRunningTime="2026-02-02 06:59:22.843334548 +0000 UTC m=+788.220602470" Feb 02 06:59:23 crc kubenswrapper[4842]: I0202 06:59:23.145334 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2"] Feb 02 06:59:23 crc kubenswrapper[4842]: I0202 06:59:23.825521 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" event={"ID":"1875099f-a0f5-4ba0-b757-35755a6d0bcd","Type":"ContainerStarted","Data":"110ec542951fe4790c0535382759d19648b9b0a377f494779d0c7737e915394e"} Feb 02 06:59:23 crc kubenswrapper[4842]: I0202 06:59:23.875124 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:23 crc kubenswrapper[4842]: I0202 06:59:23.930774 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.113749 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.834147 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-hrqrp" event={"ID":"558d578f-dad2-4317-8efd-628e30fe306e","Type":"ContainerStarted","Data":"0f300c6d6e33c749d7219cadd322664a665ec3a7802a87d38b6295954a1c8fa7"} Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.835107 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.836031 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" event={"ID":"a9864264-6d23-4a03-8464-6b52a81c01d1","Type":"ContainerStarted","Data":"20420eabf4b1ba3779ed32b3886f9d38b14c1b9345038674f9ae67804f3490f9"} Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.836180 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.846432 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" event={"ID":"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8","Type":"ContainerStarted","Data":"fdde21f5e752186e690bd5c8bf4cda9461c234ab211e8e6deb9eb468de2ae398"} Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.849698 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-hrqrp" podStartSLOduration=1.7119673930000001 podStartE2EDuration="3.849686534s" podCreationTimestamp="2026-02-02 06:59:21 +0000 UTC" firstStartedPulling="2026-02-02 06:59:21.980338717 +0000 UTC m=+787.357606629" lastFinishedPulling="2026-02-02 06:59:24.118057818 +0000 UTC m=+789.495325770" observedRunningTime="2026-02-02 06:59:24.848357301 +0000 UTC m=+790.225625224" watchObservedRunningTime="2026-02-02 06:59:24.849686534 +0000 UTC m=+790.226954456" Feb 02 06:59:24 crc kubenswrapper[4842]: I0202 06:59:24.881124 4842 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" podStartSLOduration=1.85111568 podStartE2EDuration="3.881098218s" podCreationTimestamp="2026-02-02 06:59:21 +0000 UTC" firstStartedPulling="2026-02-02 06:59:22.159610392 +0000 UTC m=+787.536878304" lastFinishedPulling="2026-02-02 06:59:24.18959289 +0000 UTC m=+789.566860842" observedRunningTime="2026-02-02 06:59:24.870081647 +0000 UTC m=+790.247349569" watchObservedRunningTime="2026-02-02 06:59:24.881098218 +0000 UTC m=+790.258366170" Feb 02 06:59:25 crc kubenswrapper[4842]: I0202 06:59:25.853947 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" event={"ID":"1875099f-a0f5-4ba0-b757-35755a6d0bcd","Type":"ContainerStarted","Data":"be97fab3866ef62f2e834b9c7047a4e6708c1bf5d2e9069a74fc6c1c53dea188"} Feb 02 06:59:25 crc kubenswrapper[4842]: I0202 06:59:25.854801 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8ltqd" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="registry-server" containerID="cri-o://b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a" gracePeriod=2 Feb 02 06:59:25 crc kubenswrapper[4842]: I0202 06:59:25.877652 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-z2jg2" podStartSLOduration=2.849464683 podStartE2EDuration="4.877633996s" podCreationTimestamp="2026-02-02 06:59:21 +0000 UTC" firstStartedPulling="2026-02-02 06:59:23.154776377 +0000 UTC m=+788.532044289" lastFinishedPulling="2026-02-02 06:59:25.18294565 +0000 UTC m=+790.560213602" observedRunningTime="2026-02-02 06:59:25.871878515 +0000 UTC m=+791.249146467" watchObservedRunningTime="2026-02-02 06:59:25.877633996 +0000 UTC m=+791.254901908" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.428694 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.540022 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities\") pod \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.540442 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content\") pod \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.540530 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qr5d\" (UniqueName: \"kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d\") pod \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\" (UID: \"ad236f43-9e37-4d8d-bdf5-838729fd7aa9\") " Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.542804 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities" (OuterVolumeSpecName: "utilities") pod "ad236f43-9e37-4d8d-bdf5-838729fd7aa9" (UID: "ad236f43-9e37-4d8d-bdf5-838729fd7aa9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.549069 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d" (OuterVolumeSpecName: "kube-api-access-4qr5d") pod "ad236f43-9e37-4d8d-bdf5-838729fd7aa9" (UID: "ad236f43-9e37-4d8d-bdf5-838729fd7aa9"). InnerVolumeSpecName "kube-api-access-4qr5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.642443 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qr5d\" (UniqueName: \"kubernetes.io/projected/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-kube-api-access-4qr5d\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.642488 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.676350 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad236f43-9e37-4d8d-bdf5-838729fd7aa9" (UID: "ad236f43-9e37-4d8d-bdf5-838729fd7aa9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.744251 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad236f43-9e37-4d8d-bdf5-838729fd7aa9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.865390 4842 generic.go:334] "Generic (PLEG): container finished" podID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerID="b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a" exitCode=0 Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.865497 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerDied","Data":"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a"} Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.865539 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ltqd" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.865568 4842 scope.go:117] "RemoveContainer" containerID="b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.865550 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ltqd" event={"ID":"ad236f43-9e37-4d8d-bdf5-838729fd7aa9","Type":"ContainerDied","Data":"2ff52db295c35e880c88d5b5145e78895e572aab912fdc448aa89426cbd58de9"} Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.870004 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" event={"ID":"a4c06cff-e4b9-41be-a253-b1bf70dc1dc8","Type":"ContainerStarted","Data":"fb79482d9e32256b7087d6b760a4494bb83759f7391627be6bb55f49a53136b9"} Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.900688 4842 scope.go:117] "RemoveContainer" containerID="144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.922633 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-h4nv5" podStartSLOduration=1.776941543 podStartE2EDuration="5.922605659s" podCreationTimestamp="2026-02-02 06:59:21 +0000 UTC" firstStartedPulling="2026-02-02 06:59:22.115498725 +0000 UTC m=+787.492766627" lastFinishedPulling="2026-02-02 06:59:26.261162831 +0000 UTC m=+791.638430743" observedRunningTime="2026-02-02 06:59:26.903403076 +0000 UTC m=+792.280671068" watchObservedRunningTime="2026-02-02 06:59:26.922605659 +0000 UTC m=+792.299873601" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.936905 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.944813 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8ltqd"] Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.951982 4842 scope.go:117] "RemoveContainer" containerID="e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.968980 4842 scope.go:117] "RemoveContainer" containerID="b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a" Feb 02 06:59:26 crc kubenswrapper[4842]: E0202 06:59:26.969708 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a\": container with ID starting with b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a not found: ID does not exist" containerID="b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.969769 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a"} err="failed to get container status \"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a\": rpc error: code = NotFound desc = could not find container \"b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a\": container with ID starting with b357f288beaed7b2219514f3179704e90cd47a1fd0c17fcc7ad6a7a72606c46a not found: ID does not exist" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.969809 4842 scope.go:117] 
"RemoveContainer" containerID="144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3" Feb 02 06:59:26 crc kubenswrapper[4842]: E0202 06:59:26.970337 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3\": container with ID starting with 144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3 not found: ID does not exist" containerID="144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.970380 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3"} err="failed to get container status \"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3\": rpc error: code = NotFound desc = could not find container \"144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3\": container with ID starting with 144e86b73203127253808ef02a958ca83742ae341b33ae851a29e2f3d4ef61f3 not found: ID does not exist" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.970413 4842 scope.go:117] "RemoveContainer" containerID="e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f" Feb 02 06:59:26 crc kubenswrapper[4842]: E0202 06:59:26.970774 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f\": container with ID starting with e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f not found: ID does not exist" containerID="e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f" Feb 02 06:59:26 crc kubenswrapper[4842]: I0202 06:59:26.970802 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f"} err="failed to get container status \"e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f\": rpc error: code = NotFound desc = could not find container \"e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f\": container with ID starting with e1b28ff34be39ba62330ea3a8164e321a1af259f18d1bac47177888c7fca820f not found: ID does not exist" Feb 02 06:59:27 crc kubenswrapper[4842]: E0202 06:59:27.001818 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad236f43_9e37_4d8d_bdf5_838729fd7aa9.slice/crio-2ff52db295c35e880c88d5b5145e78895e572aab912fdc448aa89426cbd58de9\": RecentStats: unable to find data in memory cache]" Feb 02 06:59:27 crc kubenswrapper[4842]: I0202 06:59:27.447644 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" path="/var/lib/kubelet/pods/ad236f43-9e37-4d8d-bdf5-838729fd7aa9/volumes" Feb 02 06:59:31 crc kubenswrapper[4842]: I0202 06:59:31.988458 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-hrqrp" Feb 02 06:59:32 crc kubenswrapper[4842]: I0202 06:59:32.237993 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:32 crc kubenswrapper[4842]: I0202 06:59:32.238071 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:32 crc kubenswrapper[4842]: I0202 06:59:32.245258 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:32 crc kubenswrapper[4842]: I0202 06:59:32.933409 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-655b6b84f6-kkbsq" Feb 02 06:59:32 crc kubenswrapper[4842]: I0202 06:59:32.997987 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"] Feb 02 06:59:41 crc kubenswrapper[4842]: I0202 06:59:41.948654 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-ctgl4" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.323667 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp"] Feb 02 06:59:56 crc kubenswrapper[4842]: E0202 06:59:56.324588 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="registry-server" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.324609 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="registry-server" Feb 02 06:59:56 crc kubenswrapper[4842]: E0202 06:59:56.324626 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="extract-utilities" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.324639 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="extract-utilities" Feb 02 06:59:56 crc kubenswrapper[4842]: E0202 06:59:56.324672 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="extract-content" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.324685 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="extract-content" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.324878 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad236f43-9e37-4d8d-bdf5-838729fd7aa9" containerName="registry-server" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.326155 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.332994 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.333178 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp"] Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.494368 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.494801 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.494883 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgn7x\" (UniqueName: \"kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.596803 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgn7x\" (UniqueName: \"kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.596969 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.597031 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.598039 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.598193 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.633179 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgn7x\" (UniqueName: \"kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:56 crc kubenswrapper[4842]: I0202 06:59:56.657622 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 06:59:57 crc kubenswrapper[4842]: I0202 06:59:57.170091 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp"] Feb 02 06:59:57 crc kubenswrapper[4842]: W0202 06:59:57.184724 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb4e0f2b_3826_4669_8732_05eb885adfe5.slice/crio-6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4 WatchSource:0}: Error finding container 6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4: Status 404 returned error can't find the container with id 6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4 Feb 02 06:59:57 crc kubenswrapper[4842]: E0202 06:59:57.524385 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb4e0f2b_3826_4669_8732_05eb885adfe5.slice/crio-ed092ac8bf4cbba920b50ca964aa67edb99175fc3f707a1dbf75a3945e77fedf.scope\": RecentStats: unable to find data in memory cache]" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.067895 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-kmw8f" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console" containerID="cri-o://87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434" gracePeriod=15 Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.111333 4842 generic.go:334] "Generic (PLEG): container finished" podID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerID="ed092ac8bf4cbba920b50ca964aa67edb99175fc3f707a1dbf75a3945e77fedf" exitCode=0 Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.111485 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" 
event={"ID":"bb4e0f2b-3826-4669-8732-05eb885adfe5","Type":"ContainerDied","Data":"ed092ac8bf4cbba920b50ca964aa67edb99175fc3f707a1dbf75a3945e77fedf"} Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.111539 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" event={"ID":"bb4e0f2b-3826-4669-8732-05eb885adfe5","Type":"ContainerStarted","Data":"6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4"} Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.521994 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kmw8f_59990591-2248-489b-bac2-e7cab22482f8/console/0.log" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.522392 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-kmw8f" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624458 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpmb2\" (UniqueName: \"kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624503 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624552 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624573 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624605 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624659 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.624744 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config\") pod \"59990591-2248-489b-bac2-e7cab22482f8\" (UID: \"59990591-2248-489b-bac2-e7cab22482f8\") " Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.626079 4842 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca" (OuterVolumeSpecName: "service-ca") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.626158 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config" (OuterVolumeSpecName: "console-config") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.626190 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.626251 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.643540 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.644806 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.645003 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2" (OuterVolumeSpecName: "kube-api-access-wpmb2") pod "59990591-2248-489b-bac2-e7cab22482f8" (UID: "59990591-2248-489b-bac2-e7cab22482f8"). InnerVolumeSpecName "kube-api-access-wpmb2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726150 4842 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726247 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpmb2\" (UniqueName: \"kubernetes.io/projected/59990591-2248-489b-bac2-e7cab22482f8-kube-api-access-wpmb2\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726279 4842 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-console-config\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726304 4842 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-service-ca\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726333 4842 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726361 4842 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/59990591-2248-489b-bac2-e7cab22482f8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:58 crc kubenswrapper[4842]: I0202 06:59:58.726385 4842 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59990591-2248-489b-bac2-e7cab22482f8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120563 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-kmw8f_59990591-2248-489b-bac2-e7cab22482f8/console/0.log" Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120637 4842 generic.go:334] "Generic (PLEG): container finished" podID="59990591-2248-489b-bac2-e7cab22482f8" containerID="87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434" exitCode=2 Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120680 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kmw8f" event={"ID":"59990591-2248-489b-bac2-e7cab22482f8","Type":"ContainerDied","Data":"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434"} Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120716 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-kmw8f" event={"ID":"59990591-2248-489b-bac2-e7cab22482f8","Type":"ContainerDied","Data":"f626d676ce0b2dbd85f858b166fb0050d475783a83143a42e19f369ae37353e6"} Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120746 4842 scope.go:117] "RemoveContainer" containerID="87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434" Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.120909 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-kmw8f"
Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.172535 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"]
Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.180537 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-kmw8f"]
Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.228641 4842 scope.go:117] "RemoveContainer" containerID="87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434"
Feb 02 06:59:59 crc kubenswrapper[4842]: E0202 06:59:59.229262 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434\": container with ID starting with 87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434 not found: ID does not exist" containerID="87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434"
Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.229302 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434"} err="failed to get container status \"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434\": rpc error: code = NotFound desc = could not find container \"87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434\": container with ID starting with 87c6b411dfe277d9ab669c640478cf0b6070af5d629655273a23697ab8ba0434 not found: ID does not exist"
Feb 02 06:59:59 crc kubenswrapper[4842]: I0202 06:59:59.451189 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59990591-2248-489b-bac2-e7cab22482f8" path="/var/lib/kubelet/pods/59990591-2248-489b-bac2-e7cab22482f8/volumes"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.132263 4842 generic.go:334] "Generic (PLEG): container finished" podID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerID="53158c113d43cbb2bb783b307208a0f826a90fb0a10ad9e93767be3d50edb5ea" exitCode=0
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.132326 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" event={"ID":"bb4e0f2b-3826-4669-8732-05eb885adfe5","Type":"ContainerDied","Data":"53158c113d43cbb2bb783b307208a0f826a90fb0a10ad9e93767be3d50edb5ea"}
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.195140 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"]
Feb 02 07:00:00 crc kubenswrapper[4842]: E0202 07:00:00.199855 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.199903 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.200188 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="59990591-2248-489b-bac2-e7cab22482f8" containerName="console"
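The suffix in collect-profiles-29500260-8hlgn is not random: the CronJob controller names each child Job after its scheduled time expressed in minutes since the Unix epoch, and 29500260 decodes to exactly the 07:00:00 tick at which the SyncLoop ADD above arrives. A one-liner to check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// CronJob child Jobs are named <cronjob>-<scheduled minutes since epoch>.
	const suffix = 29500260
	fmt.Println(time.Unix(suffix*60, 0).UTC()) // 2026-02-02 07:00:00 +0000 UTC
}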
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.201254 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.204877 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.205953 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"]
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.208604 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.350047 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.350657 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.350726 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl8n5\" (UniqueName: \"kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.451368 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.451412 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl8n5\" (UniqueName: \"kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.451435 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"
Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.452467 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume\") pod
\"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.459505 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.483030 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl8n5\" (UniqueName: \"kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5\") pod \"collect-profiles-29500260-8hlgn\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.560524 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:00 crc kubenswrapper[4842]: I0202 07:00:00.824864 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"] Feb 02 07:00:00 crc kubenswrapper[4842]: W0202 07:00:00.836442 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda36ad95_63f3_4cfb_8da7_96b730ccc79b.slice/crio-fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a WatchSource:0}: Error finding container fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a: Status 404 returned error can't find the container with id fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a Feb 02 07:00:01 crc kubenswrapper[4842]: I0202 07:00:01.141658 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" event={"ID":"da36ad95-63f3-4cfb-8da7-96b730ccc79b","Type":"ContainerStarted","Data":"dce0962765d9bf38cd06dbb96cb12282f1586c08a47e1dfbc418a62406ef2e49"} Feb 02 07:00:01 crc kubenswrapper[4842]: I0202 07:00:01.141717 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" event={"ID":"da36ad95-63f3-4cfb-8da7-96b730ccc79b","Type":"ContainerStarted","Data":"fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a"} Feb 02 07:00:01 crc kubenswrapper[4842]: I0202 07:00:01.147143 4842 generic.go:334] "Generic (PLEG): container finished" podID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerID="01cc3645a9de560ef76c7015efc21ddb5ce809fbe1708e54bbcf1d0de5f30d75" exitCode=0 Feb 02 07:00:01 crc kubenswrapper[4842]: I0202 07:00:01.147184 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" event={"ID":"bb4e0f2b-3826-4669-8732-05eb885adfe5","Type":"ContainerDied","Data":"01cc3645a9de560ef76c7015efc21ddb5ce809fbe1708e54bbcf1d0de5f30d75"} Feb 02 07:00:01 crc kubenswrapper[4842]: I0202 07:00:01.166126 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" podStartSLOduration=1.166106288 podStartE2EDuration="1.166106288s" 
podCreationTimestamp="2026-02-02 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:00:01.161575466 +0000 UTC m=+826.538843408" watchObservedRunningTime="2026-02-02 07:00:01.166106288 +0000 UTC m=+826.543374210" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.153970 4842 generic.go:334] "Generic (PLEG): container finished" podID="da36ad95-63f3-4cfb-8da7-96b730ccc79b" containerID="dce0962765d9bf38cd06dbb96cb12282f1586c08a47e1dfbc418a62406ef2e49" exitCode=0 Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.154061 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" event={"ID":"da36ad95-63f3-4cfb-8da7-96b730ccc79b","Type":"ContainerDied","Data":"dce0962765d9bf38cd06dbb96cb12282f1586c08a47e1dfbc418a62406ef2e49"} Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.460124 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.581477 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgn7x\" (UniqueName: \"kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x\") pod \"bb4e0f2b-3826-4669-8732-05eb885adfe5\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.582872 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle\") pod \"bb4e0f2b-3826-4669-8732-05eb885adfe5\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.582988 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util\") pod \"bb4e0f2b-3826-4669-8732-05eb885adfe5\" (UID: \"bb4e0f2b-3826-4669-8732-05eb885adfe5\") " Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.584507 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle" (OuterVolumeSpecName: "bundle") pod "bb4e0f2b-3826-4669-8732-05eb885adfe5" (UID: "bb4e0f2b-3826-4669-8732-05eb885adfe5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.590105 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x" (OuterVolumeSpecName: "kube-api-access-zgn7x") pod "bb4e0f2b-3826-4669-8732-05eb885adfe5" (UID: "bb4e0f2b-3826-4669-8732-05eb885adfe5"). InnerVolumeSpecName "kube-api-access-zgn7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.603726 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util" (OuterVolumeSpecName: "util") pod "bb4e0f2b-3826-4669-8732-05eb885adfe5" (UID: "bb4e0f2b-3826-4669-8732-05eb885adfe5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.685446 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgn7x\" (UniqueName: \"kubernetes.io/projected/bb4e0f2b-3826-4669-8732-05eb885adfe5-kube-api-access-zgn7x\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.685522 4842 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:02 crc kubenswrapper[4842]: I0202 07:00:02.685544 4842 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb4e0f2b-3826-4669-8732-05eb885adfe5-util\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.169811 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.169804 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp" event={"ID":"bb4e0f2b-3826-4669-8732-05eb885adfe5","Type":"ContainerDied","Data":"6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4"} Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.170045 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6730659c3a7373b1a89b3d0bb6b20152699850dfd1a17dcbce4ec3f7dadec6b4" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.518730 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.701508 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume\") pod \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.701619 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume\") pod \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.701659 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl8n5\" (UniqueName: \"kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5\") pod \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\" (UID: \"da36ad95-63f3-4cfb-8da7-96b730ccc79b\") " Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.702577 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume" (OuterVolumeSpecName: "config-volume") pod "da36ad95-63f3-4cfb-8da7-96b730ccc79b" (UID: "da36ad95-63f3-4cfb-8da7-96b730ccc79b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.707517 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5" (OuterVolumeSpecName: "kube-api-access-rl8n5") pod "da36ad95-63f3-4cfb-8da7-96b730ccc79b" (UID: "da36ad95-63f3-4cfb-8da7-96b730ccc79b"). InnerVolumeSpecName "kube-api-access-rl8n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.708016 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "da36ad95-63f3-4cfb-8da7-96b730ccc79b" (UID: "da36ad95-63f3-4cfb-8da7-96b730ccc79b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.803847 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/da36ad95-63f3-4cfb-8da7-96b730ccc79b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.803904 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da36ad95-63f3-4cfb-8da7-96b730ccc79b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:03 crc kubenswrapper[4842]: I0202 07:00:03.803926 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl8n5\" (UniqueName: \"kubernetes.io/projected/da36ad95-63f3-4cfb-8da7-96b730ccc79b-kube-api-access-rl8n5\") on node \"crc\" DevicePath \"\"" Feb 02 07:00:04 crc kubenswrapper[4842]: I0202 07:00:04.185729 4842 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 07:00:04 crc kubenswrapper[4842]: I0202 07:00:04.185729 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"
Feb 02 07:00:04 crc kubenswrapper[4842]: I0202 07:00:04.185736 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn" event={"ID":"da36ad95-63f3-4cfb-8da7-96b730ccc79b","Type":"ContainerDied","Data":"fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a"}
Feb 02 07:00:04 crc kubenswrapper[4842]: I0202 07:00:04.185926 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdce6edc635982c7b3d799c8647b640c3683122c52bfad2ac4bc2368d96f8f3a"
Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.277412 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc"]
Feb 02 07:00:11 crc kubenswrapper[4842]: E0202 07:00:11.278122 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="util"
Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278137 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="util"
Feb 02 07:00:11 crc kubenswrapper[4842]: E0202 07:00:11.278158 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="extract"
Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278167 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="extract"
Feb 02 07:00:11 crc kubenswrapper[4842]: E0202 07:00:11.278179 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da36ad95-63f3-4cfb-8da7-96b730ccc79b" containerName="collect-profiles"
Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278187 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="da36ad95-63f3-4cfb-8da7-96b730ccc79b" containerName="collect-profiles"
Feb 02 07:00:11 crc kubenswrapper[4842]: E0202 07:00:11.278205 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="pull"
Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278234 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="pull"
Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278345 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4e0f2b-3826-4669-8732-05eb885adfe5" containerName="extract"
Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278365 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="da36ad95-63f3-4cfb-8da7-96b730ccc79b" containerName="collect-profiles"
Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.278763 4842 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.280641 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8hchk" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.282706 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.283108 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.283329 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.292823 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.316227 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc"] Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.406160 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-webhook-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.406453 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-apiservice-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.406602 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4mq5\" (UniqueName: \"kubernetes.io/projected/b3b00acd-6687-457f-8744-7057f840e5bd-kube-api-access-b4mq5\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.507719 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-webhook-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.507770 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-apiservice-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.507802 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4mq5\" (UniqueName: \"kubernetes.io/projected/b3b00acd-6687-457f-8744-7057f840e5bd-kube-api-access-b4mq5\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.514189 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-apiservice-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.515915 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3b00acd-6687-457f-8744-7057f840e5bd-webhook-cert\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.531166 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9"] Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.532175 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.534286 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.534581 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6prb5" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.535200 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.542798 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4mq5\" (UniqueName: \"kubernetes.io/projected/b3b00acd-6687-457f-8744-7057f840e5bd-kube-api-access-b4mq5\") pod \"metallb-operator-controller-manager-74749cc964-2p2rc\" (UID: \"b3b00acd-6687-457f-8744-7057f840e5bd\") " pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.582535 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9"] Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.598244 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.710558 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-apiservice-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.710851 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-webhook-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.711007 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5tzz\" (UniqueName: \"kubernetes.io/projected/793714c2-9e47-4e82-a201-e2e8ac9d7bff-kube-api-access-g5tzz\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.812147 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-webhook-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.812236 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5tzz\" (UniqueName: \"kubernetes.io/projected/793714c2-9e47-4e82-a201-e2e8ac9d7bff-kube-api-access-g5tzz\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.812258 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-apiservice-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.817890 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-apiservice-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.828017 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5tzz\" (UniqueName: \"kubernetes.io/projected/793714c2-9e47-4e82-a201-e2e8ac9d7bff-kube-api-access-g5tzz\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: 
\"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.828479 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/793714c2-9e47-4e82-a201-e2e8ac9d7bff-webhook-cert\") pod \"metallb-operator-webhook-server-7f569b8d8f-wvbf9\" (UID: \"793714c2-9e47-4e82-a201-e2e8ac9d7bff\") " pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.874528 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:11 crc kubenswrapper[4842]: I0202 07:00:11.911946 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc"] Feb 02 07:00:11 crc kubenswrapper[4842]: W0202 07:00:11.921153 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3b00acd_6687_457f_8744_7057f840e5bd.slice/crio-7e0395fd61c3381217732e5cd4cc388e5494d9adeb18d4e7834155efed3ce7ee WatchSource:0}: Error finding container 7e0395fd61c3381217732e5cd4cc388e5494d9adeb18d4e7834155efed3ce7ee: Status 404 returned error can't find the container with id 7e0395fd61c3381217732e5cd4cc388e5494d9adeb18d4e7834155efed3ce7ee Feb 02 07:00:12 crc kubenswrapper[4842]: I0202 07:00:12.118672 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9"] Feb 02 07:00:12 crc kubenswrapper[4842]: W0202 07:00:12.126452 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod793714c2_9e47_4e82_a201_e2e8ac9d7bff.slice/crio-bfe70952017243c23de28187630e2460d3f780b1e7d3ba9e9a3934900eb2ecae WatchSource:0}: Error finding container bfe70952017243c23de28187630e2460d3f780b1e7d3ba9e9a3934900eb2ecae: Status 404 returned error can't find the container with id bfe70952017243c23de28187630e2460d3f780b1e7d3ba9e9a3934900eb2ecae Feb 02 07:00:12 crc kubenswrapper[4842]: I0202 07:00:12.233794 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" event={"ID":"793714c2-9e47-4e82-a201-e2e8ac9d7bff","Type":"ContainerStarted","Data":"bfe70952017243c23de28187630e2460d3f780b1e7d3ba9e9a3934900eb2ecae"} Feb 02 07:00:12 crc kubenswrapper[4842]: I0202 07:00:12.234721 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" event={"ID":"b3b00acd-6687-457f-8744-7057f840e5bd","Type":"ContainerStarted","Data":"7e0395fd61c3381217732e5cd4cc388e5494d9adeb18d4e7834155efed3ce7ee"} Feb 02 07:00:16 crc kubenswrapper[4842]: I0202 07:00:16.263964 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" event={"ID":"b3b00acd-6687-457f-8744-7057f840e5bd","Type":"ContainerStarted","Data":"b4b36f0fab828459cd9384225a61edafc93a85b49a1286f4b49de5a26b26d8d6"} Feb 02 07:00:16 crc kubenswrapper[4842]: I0202 07:00:16.264910 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:16 crc kubenswrapper[4842]: I0202 07:00:16.266417 4842 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" event={"ID":"793714c2-9e47-4e82-a201-e2e8ac9d7bff","Type":"ContainerStarted","Data":"1bdaceb2b2fc4d1e07a515c090032882ebac5945f6e75210bddcaedb9529a0da"} Feb 02 07:00:16 crc kubenswrapper[4842]: I0202 07:00:16.266685 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:16 crc kubenswrapper[4842]: I0202 07:00:16.302814 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" podStartSLOduration=1.397689371 podStartE2EDuration="5.302780732s" podCreationTimestamp="2026-02-02 07:00:11 +0000 UTC" firstStartedPulling="2026-02-02 07:00:11.923063453 +0000 UTC m=+837.300331355" lastFinishedPulling="2026-02-02 07:00:15.828154804 +0000 UTC m=+841.205422716" observedRunningTime="2026-02-02 07:00:16.294859087 +0000 UTC m=+841.672127059" watchObservedRunningTime="2026-02-02 07:00:16.302780732 +0000 UTC m=+841.680048684" Feb 02 07:00:31 crc kubenswrapper[4842]: I0202 07:00:31.879978 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" Feb 02 07:00:31 crc kubenswrapper[4842]: I0202 07:00:31.918590 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7f569b8d8f-wvbf9" podStartSLOduration=17.147146949 podStartE2EDuration="20.918561729s" podCreationTimestamp="2026-02-02 07:00:11 +0000 UTC" firstStartedPulling="2026-02-02 07:00:12.129405954 +0000 UTC m=+837.506673866" lastFinishedPulling="2026-02-02 07:00:15.900820734 +0000 UTC m=+841.278088646" observedRunningTime="2026-02-02 07:00:16.334512813 +0000 UTC m=+841.711780735" watchObservedRunningTime="2026-02-02 07:00:31.918561729 +0000 UTC m=+857.295829671" Feb 02 07:00:42 crc kubenswrapper[4842]: I0202 07:00:42.146473 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:00:42 crc kubenswrapper[4842]: I0202 07:00:42.147064 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:00:51 crc kubenswrapper[4842]: I0202 07:00:51.602138 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74749cc964-2p2rc" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.354287 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fvmtq"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.356823 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.358680 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.358953 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.361013 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lg845" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.365899 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.366841 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.371496 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.371547 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.460763 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-74hmd"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.461545 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.463416 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.463734 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.464103 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-4kbng" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.464615 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.475619 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-7h9kp"] Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.476433 4842 util.go:30] "No sandbox for pod can be found. 
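The two "Observed pod startup duration" entries above make the tracker's arithmetic visible: for the controller-manager pod, podStartE2EDuration (5.302780732s) equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (1.397689371) is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Go check against the logged timestamps; the variable names are mine, not kubelet's, and the last-digit difference from the logged SLO value is float rounding:

package main

import (
	"fmt"
	"time"
)

// layout matches the "2026-02-02 07:00:11.923063453 +0000 UTC" form used in the log
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-02 07:00:11 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2026-02-02 07:00:11.923063453 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-02-02 07:00:15.828154804 +0000 UTC")  // lastFinishedPulling
	observed := mustParse("2026-02-02 07:00:16.302780732 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // podStartE2EDuration: 5.302780732s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: image-pull time excluded
	fmt.Println(e2e, slo)                // 5.302780732s 1.397689381s (log: 1.397689371)
}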
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.476433 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7h9kp"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.485763 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505719 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505776 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-reloader\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505792 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics-certs\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505831 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-startup\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505860 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklwf\" (UniqueName: \"kubernetes.io/projected/412f3125-792a-4cb4-858e-e0376903066a-kube-api-access-pklwf\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505877 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5znk\" (UniqueName: \"kubernetes.io/projected/79110fb7-d2a2-4330-ab4b-d717a7b943e6-kube-api-access-c5znk\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505894 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505910 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-sockets\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.505924 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-conf\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.506267 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7h9kp"]
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606728 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stm2c\" (UniqueName: \"kubernetes.io/projected/3016a0a1-abd6-486a-af0b-cf4c7b8db672-kube-api-access-stm2c\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606774 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606799 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-reloader\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606813 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics-certs\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606842 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606865 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metrics-certs\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606886 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-startup\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606903 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxnw\" (UniqueName: \"kubernetes.io/projected/890c2fc6-f70e-47e4-8578-908ec14d719f-kube-api-access-mdxnw\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606918 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metallb-excludel2\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606939 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-metrics-certs\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606962 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklwf\" (UniqueName: \"kubernetes.io/projected/412f3125-792a-4cb4-858e-e0376903066a-kube-api-access-pklwf\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.606977 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5znk\" (UniqueName: \"kubernetes.io/projected/79110fb7-d2a2-4330-ab4b-d717a7b943e6-kube-api-access-c5znk\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607000 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607018 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-cert\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607039 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-sockets\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607058 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-conf\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.607478 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-conf\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq"
pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: E0202 07:00:52.607876 4842 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 02 07:00:52 crc kubenswrapper[4842]: E0202 07:00:52.607980 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert podName:412f3125-792a-4cb4-858e-e0376903066a nodeName:}" failed. No retries permitted until 2026-02-02 07:00:53.107963465 +0000 UTC m=+878.485231377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert") pod "frr-k8s-webhook-server-7df86c4f6c-ksx75" (UID: "412f3125-792a-4cb4-858e-e0376903066a") : secret "frr-k8s-webhook-server-cert" not found Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.608381 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-sockets\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.608431 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/79110fb7-d2a2-4330-ab4b-d717a7b943e6-reloader\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.609754 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/79110fb7-d2a2-4330-ab4b-d717a7b943e6-frr-startup\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.612835 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79110fb7-d2a2-4330-ab4b-d717a7b943e6-metrics-certs\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.625667 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5znk\" (UniqueName: \"kubernetes.io/projected/79110fb7-d2a2-4330-ab4b-d717a7b943e6-kube-api-access-c5znk\") pod \"frr-k8s-fvmtq\" (UID: \"79110fb7-d2a2-4330-ab4b-d717a7b943e6\") " pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.626659 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklwf\" (UniqueName: \"kubernetes.io/projected/412f3125-792a-4cb4-858e-e0376903066a-kube-api-access-pklwf\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.688790 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707833 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-cert\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707889 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stm2c\" (UniqueName: \"kubernetes.io/projected/3016a0a1-abd6-486a-af0b-cf4c7b8db672-kube-api-access-stm2c\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707953 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metrics-certs\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707977 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxnw\" (UniqueName: \"kubernetes.io/projected/890c2fc6-f70e-47e4-8578-908ec14d719f-kube-api-access-mdxnw\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.707990 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metallb-excludel2\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.708010 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-metrics-certs\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: E0202 07:00:52.708458 4842 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 02 07:00:52 crc kubenswrapper[4842]: E0202 07:00:52.708542 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist podName:3016a0a1-abd6-486a-af0b-cf4c7b8db672 nodeName:}" failed. No retries permitted until 2026-02-02 07:00:53.208519352 +0000 UTC m=+878.585787274 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist") pod "speaker-74hmd" (UID: "3016a0a1-abd6-486a-af0b-cf4c7b8db672") : secret "metallb-memberlist" not found Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.709400 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metallb-excludel2\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.713438 4842 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.713774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-metrics-certs\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.715321 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-metrics-certs\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.726774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/890c2fc6-f70e-47e4-8578-908ec14d719f-cert\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.733654 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxnw\" (UniqueName: \"kubernetes.io/projected/890c2fc6-f70e-47e4-8578-908ec14d719f-kube-api-access-mdxnw\") pod \"controller-6968d8fdc4-7h9kp\" (UID: \"890c2fc6-f70e-47e4-8578-908ec14d719f\") " pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.735737 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stm2c\" (UniqueName: \"kubernetes.io/projected/3016a0a1-abd6-486a-af0b-cf4c7b8db672-kube-api-access-stm2c\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd" Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.800258 4842 util.go:30] "No sandbox for pod can be found. 
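The memberlist mount failure above is retried after durationBeforeRetry 500ms, and when the secret is still missing at 07:00:53 (below) the next retry is pushed out to 1s: the delay doubles on consecutive failures of the same mount operation, which is the volume manager's per-operation exponential backoff. A toy version of that policy; the 500ms start and the doubling are what this log shows, while the cap is my assumption rather than anything visible here:

package main

import (
	"fmt"
	"time"
)

// backoff mimics the doubling visible in the durationBeforeRetry values.
type backoff struct {
	d time.Duration
}

func (b *backoff) next() time.Duration {
	if b.d == 0 {
		b.d = 500 * time.Millisecond // first retry: durationBeforeRetry 500ms
		return b.d
	}
	b.d *= 2 // 1s, 2s, 4s, ... on consecutive failures
	if limit := 2 * time.Minute; b.d > limit {
		b.d = limit // assumed cap; this log never gets past 1s
	}
	return b.d
}

func main() {
	var b backoff
	for i := 0; i < 4; i++ {
		fmt.Println(b.next()) // 500ms 1s 2s 4s
	}
}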
Feb 02 07:00:52 crc kubenswrapper[4842]: I0202 07:00:52.800258 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7h9kp"
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.009777 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7h9kp"]
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.113172 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.119940 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/412f3125-792a-4cb4-858e-e0376903066a-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksx75\" (UID: \"412f3125-792a-4cb4-858e-e0376903066a\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.214169 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd"
Feb 02 07:00:53 crc kubenswrapper[4842]: E0202 07:00:53.214391 4842 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 02 07:00:53 crc kubenswrapper[4842]: E0202 07:00:53.214444 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist podName:3016a0a1-abd6-486a-af0b-cf4c7b8db672 nodeName:}" failed. No retries permitted until 2026-02-02 07:00:54.214427034 +0000 UTC m=+879.591694956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist") pod "speaker-74hmd" (UID: "3016a0a1-abd6-486a-af0b-cf4c7b8db672") : secret "metallb-memberlist" not found
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.299196 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.521044 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7h9kp" event={"ID":"890c2fc6-f70e-47e4-8578-908ec14d719f","Type":"ContainerStarted","Data":"ec2bbc8fc0ebee72e24fff4d4806a8261d5eecdfd92fdf5f95b216de757c206b"}
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.521459 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7h9kp" event={"ID":"890c2fc6-f70e-47e4-8578-908ec14d719f","Type":"ContainerStarted","Data":"ee504b0714bc44442a0347785bd5db2a0c7c096bf32556f9f7493aa1ca07470b"}
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.521475 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7h9kp" event={"ID":"890c2fc6-f70e-47e4-8578-908ec14d719f","Type":"ContainerStarted","Data":"f5d2eed0060f1351d0e20bc0136139b2acdb5fa7d90989467bcba3d37d8c9991"}
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.521531 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-7h9kp"
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.522889 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"33dd9792e3bd6e4f15e12a23878d10f12b6c9602aceb64676a77e4372ac8b26d"}
Feb 02 07:00:53 crc kubenswrapper[4842]: I0202 07:00:53.529984 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"]
Feb 02 07:00:53 crc kubenswrapper[4842]: W0202 07:00:53.533713 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod412f3125_792a_4cb4_858e_e0376903066a.slice/crio-f521c257c7061e45eb62834ec8a7cf20c56c26d1fb4fb371d74dbb601f4988b6 WatchSource:0}: Error finding container f521c257c7061e45eb62834ec8a7cf20c56c26d1fb4fb371d74dbb601f4988b6: Status 404 returned error can't find the container with id f521c257c7061e45eb62834ec8a7cf20c56c26d1fb4fb371d74dbb601f4988b6
Feb 02 07:00:54 crc kubenswrapper[4842]: I0202 07:00:54.227040 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd"
Feb 02 07:00:54 crc kubenswrapper[4842]: I0202 07:00:54.238139 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3016a0a1-abd6-486a-af0b-cf4c7b8db672-memberlist\") pod \"speaker-74hmd\" (UID: \"3016a0a1-abd6-486a-af0b-cf4c7b8db672\") " pod="metallb-system/speaker-74hmd"
Feb 02 07:00:54 crc kubenswrapper[4842]: I0202 07:00:54.273052 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-74hmd"
Feb 02 07:00:54 crc kubenswrapper[4842]: W0202 07:00:54.292650 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3016a0a1_abd6_486a_af0b_cf4c7b8db672.slice/crio-7314f95cc89f2fbb1b90aea157628119736d4c00e316e7b9c306ca0928604633 WatchSource:0}: Error finding container 7314f95cc89f2fbb1b90aea157628119736d4c00e316e7b9c306ca0928604633: Status 404 returned error can't find the container with id 7314f95cc89f2fbb1b90aea157628119736d4c00e316e7b9c306ca0928604633
Feb 02 07:00:54 crc kubenswrapper[4842]: I0202 07:00:54.535856 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-74hmd" event={"ID":"3016a0a1-abd6-486a-af0b-cf4c7b8db672","Type":"ContainerStarted","Data":"7314f95cc89f2fbb1b90aea157628119736d4c00e316e7b9c306ca0928604633"}
Feb 02 07:00:54 crc kubenswrapper[4842]: I0202 07:00:54.540496 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" event={"ID":"412f3125-792a-4cb4-858e-e0376903066a","Type":"ContainerStarted","Data":"f521c257c7061e45eb62834ec8a7cf20c56c26d1fb4fb371d74dbb601f4988b6"}
Feb 02 07:00:55 crc kubenswrapper[4842]: I0202 07:00:55.466632 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-7h9kp" podStartSLOduration=3.466615633 podStartE2EDuration="3.466615633s" podCreationTimestamp="2026-02-02 07:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:00:53.542818334 +0000 UTC m=+878.920086316" watchObservedRunningTime="2026-02-02 07:00:55.466615633 +0000 UTC m=+880.843883535"
Feb 02 07:00:55 crc kubenswrapper[4842]: I0202 07:00:55.554582 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-74hmd" event={"ID":"3016a0a1-abd6-486a-af0b-cf4c7b8db672","Type":"ContainerStarted","Data":"43db8442d1563ae29224fbbd0701a1b4df347189ea1bc859f26d34ea5a5ce252"}
Feb 02 07:00:55 crc kubenswrapper[4842]: I0202 07:00:55.554625 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-74hmd" event={"ID":"3016a0a1-abd6-486a-af0b-cf4c7b8db672","Type":"ContainerStarted","Data":"671c7439cc5f5922688bd073539a02a0a5964c14fb1abd24c5828de35900fa25"}
Feb 02 07:00:55 crc kubenswrapper[4842]: I0202 07:00:55.555351 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-74hmd"
Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.596612 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" event={"ID":"412f3125-792a-4cb4-858e-e0376903066a","Type":"ContainerStarted","Data":"020bc96addd5d327377e6a31361f3fed0f7d394ecb75a60a59988934e8d2d5a0"}
Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.597138 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75"
Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.600811 4842 generic.go:334] "Generic (PLEG): container finished" podID="79110fb7-d2a2-4330-ab4b-d717a7b943e6" containerID="78e963621fb75711833339baa2efff3c2e3b5d625f9d32fc65d4177236ca375f" exitCode=0
event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerDied","Data":"78e963621fb75711833339baa2efff3c2e3b5d625f9d32fc65d4177236ca375f"} Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.616563 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-74hmd" podStartSLOduration=8.616531122 podStartE2EDuration="8.616531122s" podCreationTimestamp="2026-02-02 07:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:00:55.615210433 +0000 UTC m=+880.992478345" watchObservedRunningTime="2026-02-02 07:01:00.616531122 +0000 UTC m=+885.993799084" Feb 02 07:01:00 crc kubenswrapper[4842]: I0202 07:01:00.623057 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" podStartSLOduration=2.112287211 podStartE2EDuration="8.623029802s" podCreationTimestamp="2026-02-02 07:00:52 +0000 UTC" firstStartedPulling="2026-02-02 07:00:53.536351544 +0000 UTC m=+878.913619466" lastFinishedPulling="2026-02-02 07:01:00.047094115 +0000 UTC m=+885.424362057" observedRunningTime="2026-02-02 07:01:00.613759554 +0000 UTC m=+885.991027476" watchObservedRunningTime="2026-02-02 07:01:00.623029802 +0000 UTC m=+886.000297754" Feb 02 07:01:01 crc kubenswrapper[4842]: I0202 07:01:01.611904 4842 generic.go:334] "Generic (PLEG): container finished" podID="79110fb7-d2a2-4330-ab4b-d717a7b943e6" containerID="cd86e7e997837db99ee68635c8a505dfedb823af70b9d37d72b83c4ed6d88c2b" exitCode=0 Feb 02 07:01:01 crc kubenswrapper[4842]: I0202 07:01:01.612022 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerDied","Data":"cd86e7e997837db99ee68635c8a505dfedb823af70b9d37d72b83c4ed6d88c2b"} Feb 02 07:01:02 crc kubenswrapper[4842]: I0202 07:01:02.624862 4842 generic.go:334] "Generic (PLEG): container finished" podID="79110fb7-d2a2-4330-ab4b-d717a7b943e6" containerID="af140bdc1a99d830d21c65581b94a11cd63957551b80f2db7f99e580a1886814" exitCode=0 Feb 02 07:01:02 crc kubenswrapper[4842]: I0202 07:01:02.625001 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerDied","Data":"af140bdc1a99d830d21c65581b94a11cd63957551b80f2db7f99e580a1886814"} Feb 02 07:01:03 crc kubenswrapper[4842]: I0202 07:01:03.652054 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"b1fb9fb1718478fa0c4cc12b65cd0801e789795d74f4c12188350256a042a05d"} Feb 02 07:01:03 crc kubenswrapper[4842]: I0202 07:01:03.652428 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"f814b5f20b461008e484354f68963ee4388458bd5a761f5d14f77b0da409d365"} Feb 02 07:01:03 crc kubenswrapper[4842]: I0202 07:01:03.652450 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"01293b03f1d4c47f3076616d43ceb8750ee09ff878ecb94fe322c7f2e548c684"} Feb 02 07:01:03 crc kubenswrapper[4842]: I0202 07:01:03.652469 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" 
event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"93521f150f2ef4cbe56c0b2a112d3b082ddb7b5d2baba5a4dc3188d1f48f53fc"} Feb 02 07:01:03 crc kubenswrapper[4842]: I0202 07:01:03.652486 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"97e04fac79e56e025f2139eaf2a01691780f3c41aa86fcf7e02c6b4f080c6518"} Feb 02 07:01:04 crc kubenswrapper[4842]: I0202 07:01:04.279405 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-74hmd" Feb 02 07:01:04 crc kubenswrapper[4842]: I0202 07:01:04.666868 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fvmtq" event={"ID":"79110fb7-d2a2-4330-ab4b-d717a7b943e6","Type":"ContainerStarted","Data":"549273cdb5500a33a748d465e785dad1ad378ba2a110d1377572139c81cf3255"} Feb 02 07:01:04 crc kubenswrapper[4842]: I0202 07:01:04.667064 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:01:04 crc kubenswrapper[4842]: I0202 07:01:04.696969 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fvmtq" podStartSLOduration=5.469614473 podStartE2EDuration="12.696953196s" podCreationTimestamp="2026-02-02 07:00:52 +0000 UTC" firstStartedPulling="2026-02-02 07:00:52.856931708 +0000 UTC m=+878.234199620" lastFinishedPulling="2026-02-02 07:01:00.084270421 +0000 UTC m=+885.461538343" observedRunningTime="2026-02-02 07:01:04.696139576 +0000 UTC m=+890.073407498" watchObservedRunningTime="2026-02-02 07:01:04.696953196 +0000 UTC m=+890.074221108" Feb 02 07:01:05 crc kubenswrapper[4842]: I0202 07:01:05.858933 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw"] Feb 02 07:01:05 crc kubenswrapper[4842]: I0202 07:01:05.861900 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:05 crc kubenswrapper[4842]: I0202 07:01:05.866271 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 02 07:01:05 crc kubenswrapper[4842]: I0202 07:01:05.870078 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw"] Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.028999 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.029298 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6smw\" (UniqueName: \"kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.029425 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.130460 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6smw\" (UniqueName: \"kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.130541 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.130641 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.131199 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
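The frr-k8s-fvmtq sequence above -- three ContainerDied events with exitCode=0 between 07:01:00 and 07:01:02, then a burst of ContainerStarted events at 07:01:03 -- is the ordering init containers produce: each one must exit 0 before the next runs, and the regular containers start only once the last init container has succeeded. A toy sequencer of that ordering; the container names are placeholders, since the log only records container IDs:

package main

import "fmt"

func main() {
	initContainers := []string{"init-0", "init-1", "init-2"}
	containers := []string{"main-a", "main-b"}

	for _, c := range initContainers {
		fmt.Printf("ContainerStarted %s\n", c)
		exitCode := 0 // a non-zero exit would block the pod here instead
		fmt.Printf("ContainerDied %s exitCode=%d\n", c, exitCode)
		if exitCode != 0 {
			return // pod stays in Init until this container succeeds
		}
	}
	for _, c := range containers {
		fmt.Printf("ContainerStarted %s\n", c) // regular containers start together
	}
}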
\"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.131444 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.157026 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6smw\" (UniqueName: \"kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.186357 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:06 crc kubenswrapper[4842]: I0202 07:01:06.660433 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw"] Feb 02 07:01:06 crc kubenswrapper[4842]: W0202 07:01:06.677828 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68358186_3b13_493a_9141_c206629af46e.slice/crio-f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102 WatchSource:0}: Error finding container f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102: Status 404 returned error can't find the container with id f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102 Feb 02 07:01:07 crc kubenswrapper[4842]: I0202 07:01:07.688995 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:01:07 crc kubenswrapper[4842]: I0202 07:01:07.689786 4842 generic.go:334] "Generic (PLEG): container finished" podID="68358186-3b13-493a-9141-c206629af46e" containerID="f7d334b0386fa7d7f040c48b8f37d0d5d3b0e45d2f8371acf22dba51ce3bfb04" exitCode=0 Feb 02 07:01:07 crc kubenswrapper[4842]: I0202 07:01:07.689847 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" event={"ID":"68358186-3b13-493a-9141-c206629af46e","Type":"ContainerDied","Data":"f7d334b0386fa7d7f040c48b8f37d0d5d3b0e45d2f8371acf22dba51ce3bfb04"} Feb 02 07:01:07 crc kubenswrapper[4842]: I0202 07:01:07.689880 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" event={"ID":"68358186-3b13-493a-9141-c206629af46e","Type":"ContainerStarted","Data":"f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102"} Feb 02 07:01:07 crc kubenswrapper[4842]: I0202 07:01:07.766266 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:01:11 crc kubenswrapper[4842]: 
I0202 07:01:11.743741 4842 generic.go:334] "Generic (PLEG): container finished" podID="68358186-3b13-493a-9141-c206629af46e" containerID="73b7f7d4f7e26bb9f9bc1dab6a87bd9e36d8745b43faf72afee527b98add84a0" exitCode=0 Feb 02 07:01:11 crc kubenswrapper[4842]: I0202 07:01:11.743860 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" event={"ID":"68358186-3b13-493a-9141-c206629af46e","Type":"ContainerDied","Data":"73b7f7d4f7e26bb9f9bc1dab6a87bd9e36d8745b43faf72afee527b98add84a0"} Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.146868 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.147012 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.692894 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fvmtq" Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.753718 4842 generic.go:334] "Generic (PLEG): container finished" podID="68358186-3b13-493a-9141-c206629af46e" containerID="d8bae5a377ac8095538b04933b8f72015496b12fb4ebc40f444eab2deb29f116" exitCode=0 Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.753764 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" event={"ID":"68358186-3b13-493a-9141-c206629af46e","Type":"ContainerDied","Data":"d8bae5a377ac8095538b04933b8f72015496b12fb4ebc40f444eab2deb29f116"} Feb 02 07:01:12 crc kubenswrapper[4842]: I0202 07:01:12.803994 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-7h9kp" Feb 02 07:01:13 crc kubenswrapper[4842]: I0202 07:01:13.307835 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksx75" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.030938 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.155433 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util\") pod \"68358186-3b13-493a-9141-c206629af46e\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.155793 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle\") pod \"68358186-3b13-493a-9141-c206629af46e\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.156169 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6smw\" (UniqueName: \"kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw\") pod \"68358186-3b13-493a-9141-c206629af46e\" (UID: \"68358186-3b13-493a-9141-c206629af46e\") " Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.157179 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle" (OuterVolumeSpecName: "bundle") pod "68358186-3b13-493a-9141-c206629af46e" (UID: "68358186-3b13-493a-9141-c206629af46e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.165803 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw" (OuterVolumeSpecName: "kube-api-access-j6smw") pod "68358186-3b13-493a-9141-c206629af46e" (UID: "68358186-3b13-493a-9141-c206629af46e"). InnerVolumeSpecName "kube-api-access-j6smw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.168374 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util" (OuterVolumeSpecName: "util") pod "68358186-3b13-493a-9141-c206629af46e" (UID: "68358186-3b13-493a-9141-c206629af46e"). InnerVolumeSpecName "util". 
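The teardown above and the "Volume detached" entries below follow a fixed order: the reconciler starts UnmountVolume, the plugin's TearDown removes the mount from the pod's directory, and only after that is the volume dropped from the node's actual state of world. A toy sketch of that sequencing; the types and helper names are mine, not kubelet's:

package main

import "fmt"

type volume struct{ name string }

// unmount stands in for UnmountVolume: TearDown must succeed first.
func unmount(v volume) error {
	fmt.Printf("UnmountVolume started for %q\n", v.name)
	fmt.Printf("UnmountVolume.TearDown succeeded for %q\n", v.name)
	return nil
}

func main() {
	for _, v := range []volume{{"util"}, {"bundle"}, {"kube-api-access-j6smw"}} {
		if err := unmount(v); err != nil {
			continue // failed unmounts are retried, not marked detached
		}
		// only now does the actual state of world report the volume gone
		fmt.Printf("Volume detached for volume %q\n", v.name)
	}
}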
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.258581 4842 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-util\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.258642 4842 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/68358186-3b13-493a-9141-c206629af46e-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.258665 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6smw\" (UniqueName: \"kubernetes.io/projected/68358186-3b13-493a-9141-c206629af46e-kube-api-access-j6smw\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.769659 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" event={"ID":"68358186-3b13-493a-9141-c206629af46e","Type":"ContainerDied","Data":"f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102"} Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.769726 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f18d7bcdbef7cc2848f6400e581b711641602dcf44b0515c2bf081aa68cd1102" Feb 02 07:01:14 crc kubenswrapper[4842]: I0202 07:01:14.769740 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.582578 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp"] Feb 02 07:01:19 crc kubenswrapper[4842]: E0202 07:01:19.583382 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="pull" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.583396 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="pull" Feb 02 07:01:19 crc kubenswrapper[4842]: E0202 07:01:19.583417 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="util" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.583423 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="util" Feb 02 07:01:19 crc kubenswrapper[4842]: E0202 07:01:19.583432 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="extract" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.583438 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="extract" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.583558 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="68358186-3b13-493a-9141-c206629af46e" containerName="extract" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.584017 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.585907 4842 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-mq2bc" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.586097 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.589530 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.642288 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp"] Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.658530 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrlg\" (UniqueName: \"kubernetes.io/projected/c8aa6122-bb1d-4642-b85f-18a2775e7c64-kube-api-access-rbrlg\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.658625 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8aa6122-bb1d-4642-b85f-18a2775e7c64-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.760410 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8aa6122-bb1d-4642-b85f-18a2775e7c64-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.760478 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrlg\" (UniqueName: \"kubernetes.io/projected/c8aa6122-bb1d-4642-b85f-18a2775e7c64-kube-api-access-rbrlg\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.760986 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c8aa6122-bb1d-4642-b85f-18a2775e7c64-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.784217 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrlg\" (UniqueName: \"kubernetes.io/projected/c8aa6122-bb1d-4642-b85f-18a2775e7c64-kube-api-access-rbrlg\") pod \"cert-manager-operator-controller-manager-66c8bdd694-x6kcp\" (UID: \"c8aa6122-bb1d-4642-b85f-18a2775e7c64\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:19 crc kubenswrapper[4842]: I0202 07:01:19.898801 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" Feb 02 07:01:20 crc kubenswrapper[4842]: I0202 07:01:20.128817 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp"] Feb 02 07:01:20 crc kubenswrapper[4842]: W0202 07:01:20.137384 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8aa6122_bb1d_4642_b85f_18a2775e7c64.slice/crio-4e1950982c00680b01bb67cbc33fad7853db2861e66552628f27584be941f17e WatchSource:0}: Error finding container 4e1950982c00680b01bb67cbc33fad7853db2861e66552628f27584be941f17e: Status 404 returned error can't find the container with id 4e1950982c00680b01bb67cbc33fad7853db2861e66552628f27584be941f17e Feb 02 07:01:20 crc kubenswrapper[4842]: I0202 07:01:20.809120 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" event={"ID":"c8aa6122-bb1d-4642-b85f-18a2775e7c64","Type":"ContainerStarted","Data":"4e1950982c00680b01bb67cbc33fad7853db2861e66552628f27584be941f17e"} Feb 02 07:01:22 crc kubenswrapper[4842]: I0202 07:01:22.825037 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" event={"ID":"c8aa6122-bb1d-4642-b85f-18a2775e7c64","Type":"ContainerStarted","Data":"88ef9cf7369e4a80ccc0386bc1f82b40331c63c169fa0c87717acc9c5652261d"} Feb 02 07:01:22 crc kubenswrapper[4842]: I0202 07:01:22.842847 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-x6kcp" podStartSLOduration=1.396516804 podStartE2EDuration="3.842830375s" podCreationTimestamp="2026-02-02 07:01:19 +0000 UTC" firstStartedPulling="2026-02-02 07:01:20.139677237 +0000 UTC m=+905.516945149" lastFinishedPulling="2026-02-02 07:01:22.585990808 +0000 UTC m=+907.963258720" observedRunningTime="2026-02-02 07:01:22.838698113 +0000 UTC m=+908.215966035" watchObservedRunningTime="2026-02-02 07:01:22.842830375 +0000 UTC m=+908.220098287" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.682390 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hj9fx"] Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.684520 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.691537 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.691636 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.691725 4842 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mt7wl" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.698206 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hj9fx"] Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.770200 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.770460 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lwr\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-kube-api-access-p5lwr\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.872029 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lwr\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-kube-api-access-p5lwr\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.872215 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.912519 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:27 crc kubenswrapper[4842]: I0202 07:01:27.922314 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lwr\" (UniqueName: \"kubernetes.io/projected/466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9-kube-api-access-p5lwr\") pod \"cert-manager-webhook-6888856db4-hj9fx\" (UID: \"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9\") " pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.010856 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.515868 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-j6288"] Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.516937 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.521796 4842 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qppzd" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.537485 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-j6288"] Feb 02 07:01:28 crc kubenswrapper[4842]: W0202 07:01:28.539486 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod466ec5f5_a1b9_439d_a9d6_d5dbbe8d16c9.slice/crio-367e0cd12f3606c7d0f457767139838217f35ee03fbf4052d2d08e2a6e49d112 WatchSource:0}: Error finding container 367e0cd12f3606c7d0f457767139838217f35ee03fbf4052d2d08e2a6e49d112: Status 404 returned error can't find the container with id 367e0cd12f3606c7d0f457767139838217f35ee03fbf4052d2d08e2a6e49d112 Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.549571 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hj9fx"] Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.594399 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.594470 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twpfc\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-kube-api-access-twpfc\") pod \"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.695731 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.695779 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twpfc\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-kube-api-access-twpfc\") pod \"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.715011 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-bound-sa-token\") pod 
\"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.715564 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twpfc\" (UniqueName: \"kubernetes.io/projected/d7710841-a6c0-41ce-a408-f5940ab76922-kube-api-access-twpfc\") pod \"cert-manager-cainjector-5545bd876-j6288\" (UID: \"d7710841-a6c0-41ce-a408-f5940ab76922\") " pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.834036 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" Feb 02 07:01:28 crc kubenswrapper[4842]: I0202 07:01:28.884724 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" event={"ID":"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9","Type":"ContainerStarted","Data":"367e0cd12f3606c7d0f457767139838217f35ee03fbf4052d2d08e2a6e49d112"} Feb 02 07:01:29 crc kubenswrapper[4842]: I0202 07:01:29.135426 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-j6288"] Feb 02 07:01:29 crc kubenswrapper[4842]: W0202 07:01:29.138172 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7710841_a6c0_41ce_a408_f5940ab76922.slice/crio-cb3c426e42420706431002f6a7461fa16f62c4fc3c5bab220de53f4bb34144b2 WatchSource:0}: Error finding container cb3c426e42420706431002f6a7461fa16f62c4fc3c5bab220de53f4bb34144b2: Status 404 returned error can't find the container with id cb3c426e42420706431002f6a7461fa16f62c4fc3c5bab220de53f4bb34144b2 Feb 02 07:01:29 crc kubenswrapper[4842]: I0202 07:01:29.893130 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" event={"ID":"d7710841-a6c0-41ce-a408-f5940ab76922","Type":"ContainerStarted","Data":"cb3c426e42420706431002f6a7461fa16f62c4fc3c5bab220de53f4bb34144b2"} Feb 02 07:01:32 crc kubenswrapper[4842]: I0202 07:01:32.920100 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" event={"ID":"d7710841-a6c0-41ce-a408-f5940ab76922","Type":"ContainerStarted","Data":"c69e254b8d125d2465d2054062400003c2056193d2c7f1e597b8d202c9475790"} Feb 02 07:01:32 crc kubenswrapper[4842]: I0202 07:01:32.922904 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" event={"ID":"466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9","Type":"ContainerStarted","Data":"2d2659bcd7c0355849fbe98b1acb7681fddf44409d9ddd7f85f0b53858a32f6c"} Feb 02 07:01:32 crc kubenswrapper[4842]: I0202 07:01:32.923068 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:32 crc kubenswrapper[4842]: I0202 07:01:32.940120 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-j6288" podStartSLOduration=1.5371201970000001 podStartE2EDuration="4.940094403s" podCreationTimestamp="2026-02-02 07:01:28 +0000 UTC" firstStartedPulling="2026-02-02 07:01:29.141288936 +0000 UTC m=+914.518556848" lastFinishedPulling="2026-02-02 07:01:32.544263152 +0000 UTC m=+917.921531054" observedRunningTime="2026-02-02 07:01:32.937113229 +0000 UTC m=+918.314381161" 
watchObservedRunningTime="2026-02-02 07:01:32.940094403 +0000 UTC m=+918.317362335" Feb 02 07:01:36 crc kubenswrapper[4842]: I0202 07:01:36.906609 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" podStartSLOduration=5.92537136 podStartE2EDuration="9.90658345s" podCreationTimestamp="2026-02-02 07:01:27 +0000 UTC" firstStartedPulling="2026-02-02 07:01:28.543867179 +0000 UTC m=+913.921135111" lastFinishedPulling="2026-02-02 07:01:32.525079269 +0000 UTC m=+917.902347201" observedRunningTime="2026-02-02 07:01:32.97571936 +0000 UTC m=+918.352987292" watchObservedRunningTime="2026-02-02 07:01:36.90658345 +0000 UTC m=+922.283851392" Feb 02 07:01:36 crc kubenswrapper[4842]: I0202 07:01:36.910400 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:36 crc kubenswrapper[4842]: I0202 07:01:36.912181 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:36 crc kubenswrapper[4842]: I0202 07:01:36.939609 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.051904 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.051963 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.052153 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjtkr\" (UniqueName: \"kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.153664 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.153719 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.153755 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjtkr\" (UniqueName: \"kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr\") pod 
\"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.154260 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.154362 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.175617 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjtkr\" (UniqueName: \"kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr\") pod \"community-operators-9x2pr\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.243866 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.519502 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.955176 4842 generic.go:334] "Generic (PLEG): container finished" podID="548f8a7f-3f38-498d-999a-96753854d869" containerID="9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150" exitCode=0 Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.955276 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerDied","Data":"9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150"} Feb 02 07:01:37 crc kubenswrapper[4842]: I0202 07:01:37.957374 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerStarted","Data":"ce6dfceaa02df9a199ab688a09ffe666908265586305bd31da204ee7ec4758f8"} Feb 02 07:01:38 crc kubenswrapper[4842]: I0202 07:01:38.013554 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-hj9fx" Feb 02 07:01:38 crc kubenswrapper[4842]: I0202 07:01:38.963922 4842 generic.go:334] "Generic (PLEG): container finished" podID="548f8a7f-3f38-498d-999a-96753854d869" containerID="e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8" exitCode=0 Feb 02 07:01:38 crc kubenswrapper[4842]: I0202 07:01:38.964275 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerDied","Data":"e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8"} Feb 02 07:01:39 crc kubenswrapper[4842]: E0202 07:01:39.019422 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod548f8a7f_3f38_498d_999a_96753854d869.slice/crio-e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod548f8a7f_3f38_498d_999a_96753854d869.slice/crio-conmon-e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:01:39 crc kubenswrapper[4842]: I0202 07:01:39.973302 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerStarted","Data":"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2"} Feb 02 07:01:39 crc kubenswrapper[4842]: I0202 07:01:39.991637 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9x2pr" podStartSLOduration=2.588904831 podStartE2EDuration="3.991621874s" podCreationTimestamp="2026-02-02 07:01:36 +0000 UTC" firstStartedPulling="2026-02-02 07:01:37.95720413 +0000 UTC m=+923.334472042" lastFinishedPulling="2026-02-02 07:01:39.359921163 +0000 UTC m=+924.737189085" observedRunningTime="2026-02-02 07:01:39.990466136 +0000 UTC m=+925.367734088" watchObservedRunningTime="2026-02-02 07:01:39.991621874 +0000 UTC m=+925.368889786" Feb 02 07:01:41 crc kubenswrapper[4842]: I0202 07:01:41.873499 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"] Feb 02 07:01:41 crc kubenswrapper[4842]: I0202 07:01:41.874939 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:41 crc kubenswrapper[4842]: I0202 07:01:41.890308 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"] Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.032131 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.032440 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7trm\" (UniqueName: \"kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.032477 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.133042 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " 
pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.133100 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7trm\" (UniqueName: \"kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.133133 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.133774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.133798 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.145733 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.145815 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.145895 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.146758 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.146861 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93" gracePeriod=600 Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.157055 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l7trm\" (UniqueName: \"kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm\") pod \"certified-operators-ltrf2\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.191245 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:42 crc kubenswrapper[4842]: I0202 07:01:42.530171 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"] Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:42.999884 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93" exitCode=0 Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.000282 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93"} Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.000325 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62"} Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.000352 4842 scope.go:117] "RemoveContainer" containerID="75f797a8d8f9d999a2baca9e47391a8e34aa160a2187acfaf76eee81d7b0ee62" Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.005617 4842 generic.go:334] "Generic (PLEG): container finished" podID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerID="331609fa40669bd7840b308a9666007c56af8aa738cc0b311b0bd226734f37d3" exitCode=0 Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.005685 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerDied","Data":"331609fa40669bd7840b308a9666007c56af8aa738cc0b311b0bd226734f37d3"} Feb 02 07:01:43 crc kubenswrapper[4842]: I0202 07:01:43.005725 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerStarted","Data":"10d6bb6305708264d17e0f259712182618a08b4e23d2fdb9d6c3dec64e76c9e2"} Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.018723 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerStarted","Data":"5aaf954be58c33d0b0d73bce7116e84abb016b1ce966f94de9fa66d4258dc108"} Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.212311 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-446xj"] Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.214625 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.218684 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-446xj"] Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.221806 4842 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n97wc" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.273975 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgkg\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-kube-api-access-rkgkg\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.274041 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-bound-sa-token\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.375034 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgkg\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-kube-api-access-rkgkg\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.375210 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-bound-sa-token\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.394178 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgkg\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-kube-api-access-rkgkg\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.394283 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ffbe6b41-d1da-4aec-bbfd-376c2f53a962-bound-sa-token\") pod \"cert-manager-545d4d4674-446xj\" (UID: \"ffbe6b41-d1da-4aec-bbfd-376c2f53a962\") " pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.530638 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-446xj" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.812807 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-446xj"] Feb 02 07:01:44 crc kubenswrapper[4842]: W0202 07:01:44.816996 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffbe6b41_d1da_4aec_bbfd_376c2f53a962.slice/crio-0969521bc1cbf0ff59b56fc155eebe48632bcfe80b5c83a6367c40beba537a8e WatchSource:0}: Error finding container 0969521bc1cbf0ff59b56fc155eebe48632bcfe80b5c83a6367c40beba537a8e: Status 404 returned error can't find the container with id 0969521bc1cbf0ff59b56fc155eebe48632bcfe80b5c83a6367c40beba537a8e Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.878814 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"] Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.880200 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.893804 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"] Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.983998 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.984347 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:44 crc kubenswrapper[4842]: I0202 07:01:44.984405 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdpzj\" (UniqueName: \"kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.027173 4842 generic.go:334] "Generic (PLEG): container finished" podID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerID="5aaf954be58c33d0b0d73bce7116e84abb016b1ce966f94de9fa66d4258dc108" exitCode=0 Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.027263 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerDied","Data":"5aaf954be58c33d0b0d73bce7116e84abb016b1ce966f94de9fa66d4258dc108"} Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.027308 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerStarted","Data":"18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6"} Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.028799 4842 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-446xj" event={"ID":"ffbe6b41-d1da-4aec-bbfd-376c2f53a962","Type":"ContainerStarted","Data":"9682e516fdd55be77931ab601a32dc9b2a374c2ff0e637c3453756d90a6a4093"} Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.028834 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-446xj" event={"ID":"ffbe6b41-d1da-4aec-bbfd-376c2f53a962","Type":"ContainerStarted","Data":"0969521bc1cbf0ff59b56fc155eebe48632bcfe80b5c83a6367c40beba537a8e"} Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.049455 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ltrf2" podStartSLOduration=2.4133591819999998 podStartE2EDuration="4.049436084s" podCreationTimestamp="2026-02-02 07:01:41 +0000 UTC" firstStartedPulling="2026-02-02 07:01:43.007817923 +0000 UTC m=+928.385085875" lastFinishedPulling="2026-02-02 07:01:44.643894865 +0000 UTC m=+930.021162777" observedRunningTime="2026-02-02 07:01:45.045578789 +0000 UTC m=+930.422846711" watchObservedRunningTime="2026-02-02 07:01:45.049436084 +0000 UTC m=+930.426704006" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.065971 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-446xj" podStartSLOduration=1.065953381 podStartE2EDuration="1.065953381s" podCreationTimestamp="2026-02-02 07:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:01:45.061120822 +0000 UTC m=+930.438388754" watchObservedRunningTime="2026-02-02 07:01:45.065953381 +0000 UTC m=+930.443221293" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.085559 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.085613 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.085642 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdpzj\" (UniqueName: \"kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.086280 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.086356 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.104632 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdpzj\" (UniqueName: \"kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj\") pod \"redhat-marketplace-xqgvd\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.202523 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:01:45 crc kubenswrapper[4842]: I0202 07:01:45.413488 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"] Feb 02 07:01:45 crc kubenswrapper[4842]: W0202 07:01:45.420729 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0958b9f3_ea26_4013_9a68_3cf94fa2b557.slice/crio-e9e4e405947ab052bd7a0b77475b906abce643a4251b0c881206046268bc25b4 WatchSource:0}: Error finding container e9e4e405947ab052bd7a0b77475b906abce643a4251b0c881206046268bc25b4: Status 404 returned error can't find the container with id e9e4e405947ab052bd7a0b77475b906abce643a4251b0c881206046268bc25b4 Feb 02 07:01:46 crc kubenswrapper[4842]: I0202 07:01:46.038578 4842 generic.go:334] "Generic (PLEG): container finished" podID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerID="9a1b5c3686f5d2f888d619760c2c6f065e2cfcbdb7a7c316780928bdc983a404" exitCode=0 Feb 02 07:01:46 crc kubenswrapper[4842]: I0202 07:01:46.038626 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerDied","Data":"9a1b5c3686f5d2f888d619760c2c6f065e2cfcbdb7a7c316780928bdc983a404"} Feb 02 07:01:46 crc kubenswrapper[4842]: I0202 07:01:46.038678 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerStarted","Data":"e9e4e405947ab052bd7a0b77475b906abce643a4251b0c881206046268bc25b4"} Feb 02 07:01:47 crc kubenswrapper[4842]: I0202 07:01:47.049814 4842 generic.go:334] "Generic (PLEG): container finished" podID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerID="6562c4eba25712b89a5e8c0ada8a664aed0995c58fefc7d0c3c227145bba8a32" exitCode=0 Feb 02 07:01:47 crc kubenswrapper[4842]: I0202 07:01:47.050451 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerDied","Data":"6562c4eba25712b89a5e8c0ada8a664aed0995c58fefc7d0c3c227145bba8a32"} Feb 02 07:01:47 crc kubenswrapper[4842]: I0202 07:01:47.244337 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:47 crc kubenswrapper[4842]: I0202 07:01:47.244418 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:47 crc kubenswrapper[4842]: I0202 07:01:47.308865 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:48 crc kubenswrapper[4842]: I0202 07:01:48.061865 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerStarted","Data":"4192676feafbcbc6ca121e46aa534c0ceaaf73d1dd6f36b6528914037c4f83bf"} Feb 02 07:01:48 crc kubenswrapper[4842]: I0202 07:01:48.123129 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:48 crc kubenswrapper[4842]: I0202 07:01:48.149673 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xqgvd" podStartSLOduration=2.699414389 podStartE2EDuration="4.149649883s" podCreationTimestamp="2026-02-02 07:01:44 +0000 UTC" firstStartedPulling="2026-02-02 07:01:46.04101153 +0000 UTC m=+931.418279442" lastFinishedPulling="2026-02-02 07:01:47.491247014 +0000 UTC m=+932.868514936" observedRunningTime="2026-02-02 07:01:48.083564405 +0000 UTC m=+933.460832327" watchObservedRunningTime="2026-02-02 07:01:48.149649883 +0000 UTC m=+933.526917815" Feb 02 07:01:50 crc kubenswrapper[4842]: I0202 07:01:50.265591 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:50 crc kubenswrapper[4842]: I0202 07:01:50.266245 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9x2pr" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="registry-server" containerID="cri-o://a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2" gracePeriod=2 Feb 02 07:01:50 crc kubenswrapper[4842]: I0202 07:01:50.967334 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.072104 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities\") pod \"548f8a7f-3f38-498d-999a-96753854d869\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.072191 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content\") pod \"548f8a7f-3f38-498d-999a-96753854d869\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.072299 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjtkr\" (UniqueName: \"kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr\") pod \"548f8a7f-3f38-498d-999a-96753854d869\" (UID: \"548f8a7f-3f38-498d-999a-96753854d869\") " Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.073161 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities" (OuterVolumeSpecName: "utilities") pod "548f8a7f-3f38-498d-999a-96753854d869" (UID: "548f8a7f-3f38-498d-999a-96753854d869"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.083409 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr" (OuterVolumeSpecName: "kube-api-access-cjtkr") pod "548f8a7f-3f38-498d-999a-96753854d869" (UID: "548f8a7f-3f38-498d-999a-96753854d869"). InnerVolumeSpecName "kube-api-access-cjtkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.098312 4842 generic.go:334] "Generic (PLEG): container finished" podID="548f8a7f-3f38-498d-999a-96753854d869" containerID="a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2" exitCode=0 Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.098388 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9x2pr" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.098374 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerDied","Data":"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2"} Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.098480 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x2pr" event={"ID":"548f8a7f-3f38-498d-999a-96753854d869","Type":"ContainerDied","Data":"ce6dfceaa02df9a199ab688a09ffe666908265586305bd31da204ee7ec4758f8"} Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.098552 4842 scope.go:117] "RemoveContainer" containerID="a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.133529 4842 scope.go:117] "RemoveContainer" containerID="e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.150035 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "548f8a7f-3f38-498d-999a-96753854d869" (UID: "548f8a7f-3f38-498d-999a-96753854d869"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.161511 4842 scope.go:117] "RemoveContainer" containerID="9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.174268 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.174310 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548f8a7f-3f38-498d-999a-96753854d869-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.174324 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjtkr\" (UniqueName: \"kubernetes.io/projected/548f8a7f-3f38-498d-999a-96753854d869-kube-api-access-cjtkr\") on node \"crc\" DevicePath \"\"" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.185467 4842 scope.go:117] "RemoveContainer" containerID="a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2" Feb 02 07:01:51 crc kubenswrapper[4842]: E0202 07:01:51.186328 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2\": container with ID starting with a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2 not found: ID does not exist" containerID="a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.186385 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2"} err="failed to get container status \"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2\": rpc error: code = NotFound desc = could not find container \"a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2\": container with ID starting with a98d41c6f99ea7e40f7729326ae77423d9f4923ba69dc78c96d670c40dcc93b2 not found: ID does not exist" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.186411 4842 scope.go:117] "RemoveContainer" containerID="e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8" Feb 02 07:01:51 crc kubenswrapper[4842]: E0202 07:01:51.186908 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8\": container with ID starting with e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8 not found: ID does not exist" containerID="e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.186929 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8"} err="failed to get container status \"e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8\": rpc error: code = NotFound desc = could not find container \"e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8\": container with ID starting with e66cbb71a2dedb397c7fc4b4685876616e2f695ee571fb00c881428409fca0e8 not found: ID does not exist" Feb 02 07:01:51 crc 
kubenswrapper[4842]: I0202 07:01:51.186941 4842 scope.go:117] "RemoveContainer" containerID="9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150" Feb 02 07:01:51 crc kubenswrapper[4842]: E0202 07:01:51.187398 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150\": container with ID starting with 9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150 not found: ID does not exist" containerID="9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.187445 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150"} err="failed to get container status \"9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150\": rpc error: code = NotFound desc = could not find container \"9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150\": container with ID starting with 9ef439958063543ffefcc622a6723ccbf9efbfd50a080c5c02fc8a278f317150 not found: ID does not exist" Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.445031 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:51 crc kubenswrapper[4842]: I0202 07:01:51.445072 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9x2pr"] Feb 02 07:01:52 crc kubenswrapper[4842]: I0202 07:01:52.192341 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:52 crc kubenswrapper[4842]: I0202 07:01:52.192698 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:52 crc kubenswrapper[4842]: I0202 07:01:52.243577 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:53 crc kubenswrapper[4842]: I0202 07:01:53.167966 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:01:53 crc kubenswrapper[4842]: I0202 07:01:53.447050 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="548f8a7f-3f38-498d-999a-96753854d869" path="/var/lib/kubelet/pods/548f8a7f-3f38-498d-999a-96753854d869/volumes" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.884360 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5549s"] Feb 02 07:01:54 crc kubenswrapper[4842]: E0202 07:01:54.885034 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="extract-utilities" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.885056 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="extract-utilities" Feb 02 07:01:54 crc kubenswrapper[4842]: E0202 07:01:54.885072 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="extract-content" Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.885086 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="extract-content" Feb 02 
Feb 02 07:01:54 crc kubenswrapper[4842]: E0202 07:01:54.885120 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="registry-server"
Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.885137 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="registry-server"
Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.885368 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="548f8a7f-3f38-498d-999a-96753854d869" containerName="registry-server"
Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.885989 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5549s"
Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.890661 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kf99s"
Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.893307 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.894302 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 02 07:01:54 crc kubenswrapper[4842]: I0202 07:01:54.896002 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5549s"]
Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.074963 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49wgz\" (UniqueName: \"kubernetes.io/projected/e2e2a93a-9c50-4769-9983-e51f49c374d5-kube-api-access-49wgz\") pod \"openstack-operator-index-5549s\" (UID: \"e2e2a93a-9c50-4769-9983-e51f49c374d5\") " pod="openstack-operators/openstack-operator-index-5549s"
Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.177462 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49wgz\" (UniqueName: \"kubernetes.io/projected/e2e2a93a-9c50-4769-9983-e51f49c374d5-kube-api-access-49wgz\") pod \"openstack-operator-index-5549s\" (UID: \"e2e2a93a-9c50-4769-9983-e51f49c374d5\") " pod="openstack-operators/openstack-operator-index-5549s"
Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.202920 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xqgvd"
Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.203000 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xqgvd"
Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.212389 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49wgz\" (UniqueName: \"kubernetes.io/projected/e2e2a93a-9c50-4769-9983-e51f49c374d5-kube-api-access-49wgz\") pod \"openstack-operator-index-5549s\" (UID: \"e2e2a93a-9c50-4769-9983-e51f49c374d5\") " pod="openstack-operators/openstack-operator-index-5549s"
Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.256598 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5549s"
Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.279853 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xqgvd"
Feb 02 07:01:55 crc kubenswrapper[4842]: I0202 07:01:55.852975 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5549s"]
Feb 02 07:01:56 crc kubenswrapper[4842]: I0202 07:01:56.138968 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5549s" event={"ID":"e2e2a93a-9c50-4769-9983-e51f49c374d5","Type":"ContainerStarted","Data":"6ebc9d493ba802278e6c55edff41e45d01f39c4caf4d74970fd717b7f0ed0959"}
Feb 02 07:01:56 crc kubenswrapper[4842]: I0202 07:01:56.196691 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xqgvd"
Feb 02 07:01:57 crc kubenswrapper[4842]: I0202 07:01:57.149886 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5549s" event={"ID":"e2e2a93a-9c50-4769-9983-e51f49c374d5","Type":"ContainerStarted","Data":"a4054be2e6e6ef664dba1de9f7b1dfddf7e3cc36663ab73d6a99d202958ffae2"}
Feb 02 07:02:00 crc kubenswrapper[4842]: I0202 07:02:00.071956 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5549s" podStartSLOduration=5.373697913 podStartE2EDuration="6.071931043s" podCreationTimestamp="2026-02-02 07:01:54 +0000 UTC" firstStartedPulling="2026-02-02 07:01:55.851670645 +0000 UTC m=+941.228938557" lastFinishedPulling="2026-02-02 07:01:56.549903775 +0000 UTC m=+941.927171687" observedRunningTime="2026-02-02 07:01:57.171930628 +0000 UTC m=+942.549198610" watchObservedRunningTime="2026-02-02 07:02:00.071931043 +0000 UTC m=+945.449198985"
Feb 02 07:02:00 crc kubenswrapper[4842]: I0202 07:02:00.073472 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"]
Feb 02 07:02:00 crc kubenswrapper[4842]: I0202 07:02:00.074007 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ltrf2" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="registry-server" containerID="cri-o://18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6" gracePeriod=2
Feb 02 07:02:00 crc kubenswrapper[4842]: I0202 07:02:00.473606 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"]
Feb 02 07:02:00 crc kubenswrapper[4842]: I0202 07:02:00.475002 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xqgvd" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="registry-server" containerID="cri-o://4192676feafbcbc6ca121e46aa534c0ceaaf73d1dd6f36b6528914037c4f83bf" gracePeriod=2
Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.208933 4842 generic.go:334] "Generic (PLEG): container finished" podID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerID="18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6" exitCode=0
Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.209003 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerDied","Data":"18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6"}
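Note on the pod_startup_latency_tracker entry above: it carries its own arithmetic. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (07:02:00.071931043 - 07:01:54 = 6.071931043s), and podStartSLOduration is that figure minus the image-pull window, lastFinishedPulling - firstStartedPulling = 0.698233130s, giving 5.373697913s. A short, runnable Go check of those numbers (timestamps copied verbatim from the log line; the program itself is illustrative):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Wall-clock values copied from the tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-02-02 07:01:54 +0000 UTC")
	firstPull := parse("2026-02-02 07:01:55.851670645 +0000 UTC")
	lastPull := parse("2026-02-02 07:01:56.549903775 +0000 UTC")
	running := parse("2026-02-02 07:02:00.071931043 +0000 UTC")

	e2e := running.Sub(created)     // podStartE2EDuration: 6.071931043s
	pull := lastPull.Sub(firstPull) // image pull window:   698.23313ms
	slo := e2e - pull               // podStartSLOduration: 5.373697913s
	fmt.Println(e2e, pull, slo)
}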
event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerDied","Data":"18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6"} Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.211654 4842 generic.go:334] "Generic (PLEG): container finished" podID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerID="4192676feafbcbc6ca121e46aa534c0ceaaf73d1dd6f36b6528914037c4f83bf" exitCode=0 Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.211686 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerDied","Data":"4192676feafbcbc6ca121e46aa534c0ceaaf73d1dd6f36b6528914037c4f83bf"} Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.318114 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ltrf2" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.322015 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.409155 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdpzj\" (UniqueName: \"kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj\") pod \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.409264 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content\") pod \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.415297 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj" (OuterVolumeSpecName: "kube-api-access-vdpzj") pod "0958b9f3-ea26-4013-9a68-3cf94fa2b557" (UID: "0958b9f3-ea26-4013-9a68-3cf94fa2b557"). InnerVolumeSpecName "kube-api-access-vdpzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.481055 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80cf1b43-3437-4ef8-b9c7-a8bd77270228" (UID: "80cf1b43-3437-4ef8-b9c7-a8bd77270228"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.515640 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content\") pod \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.515740 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities\") pod \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\" (UID: \"0958b9f3-ea26-4013-9a68-3cf94fa2b557\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.515776 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities\") pod \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.515803 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7trm\" (UniqueName: \"kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm\") pod \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\" (UID: \"80cf1b43-3437-4ef8-b9c7-a8bd77270228\") " Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.516314 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdpzj\" (UniqueName: \"kubernetes.io/projected/0958b9f3-ea26-4013-9a68-3cf94fa2b557-kube-api-access-vdpzj\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.516339 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.516626 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities" (OuterVolumeSpecName: "utilities") pod "0958b9f3-ea26-4013-9a68-3cf94fa2b557" (UID: "0958b9f3-ea26-4013-9a68-3cf94fa2b557"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.517914 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities" (OuterVolumeSpecName: "utilities") pod "80cf1b43-3437-4ef8-b9c7-a8bd77270228" (UID: "80cf1b43-3437-4ef8-b9c7-a8bd77270228"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.522888 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm" (OuterVolumeSpecName: "kube-api-access-l7trm") pod "80cf1b43-3437-4ef8-b9c7-a8bd77270228" (UID: "80cf1b43-3437-4ef8-b9c7-a8bd77270228"). InnerVolumeSpecName "kube-api-access-l7trm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.536501 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0958b9f3-ea26-4013-9a68-3cf94fa2b557" (UID: "0958b9f3-ea26-4013-9a68-3cf94fa2b557"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.617376 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cf1b43-3437-4ef8-b9c7-a8bd77270228-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.617433 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7trm\" (UniqueName: \"kubernetes.io/projected/80cf1b43-3437-4ef8-b9c7-a8bd77270228-kube-api-access-l7trm\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.617458 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:01 crc kubenswrapper[4842]: I0202 07:02:01.617476 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0958b9f3-ea26-4013-9a68-3cf94fa2b557-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.224467 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xqgvd" event={"ID":"0958b9f3-ea26-4013-9a68-3cf94fa2b557","Type":"ContainerDied","Data":"e9e4e405947ab052bd7a0b77475b906abce643a4251b0c881206046268bc25b4"} Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.224545 4842 scope.go:117] "RemoveContainer" containerID="4192676feafbcbc6ca121e46aa534c0ceaaf73d1dd6f36b6528914037c4f83bf" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.224718 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xqgvd" Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.233169 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ltrf2" event={"ID":"80cf1b43-3437-4ef8-b9c7-a8bd77270228","Type":"ContainerDied","Data":"10d6bb6305708264d17e0f259712182618a08b4e23d2fdb9d6c3dec64e76c9e2"} Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.233275 4842 util.go:48] "No ready sandbox for pod can be found. 
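Note on the unmount sequence above: it follows the reconciler's three-step shape. reconciler_common.go:159 announces the unmount intent for each volume, operation_generator.go:803 reports TearDown success, and reconciler_common.go:293 only then records the volume as detached, i.e. actual state has caught up with desired state. A minimal sketch of that desired-versus-actual shape, assuming a simple map-backed store; reconcileUnmounts and volume are illustrative names, not kubelet's types:

// Sketch only: anything mounted (actual state) but no longer wanted
// (desired state) is torn down, and only a successful TearDown moves
// the volume to "detached".
package volsketch

type volume struct{ path string }

func reconcileUnmounts(desired, actual map[string]volume, tearDown func(volume) error) {
	for name, vol := range actual {
		if _, wanted := desired[name]; wanted {
			continue // pod still references this volume
		}
		if err := tearDown(vol); err != nil {
			continue // left in actual state; retried next pass
		}
		delete(actual, name) // corresponds to "Volume detached ... DevicePath \"\""
	}
}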
Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.264591 4842 scope.go:117] "RemoveContainer" containerID="6562c4eba25712b89a5e8c0ada8a664aed0995c58fefc7d0c3c227145bba8a32"
Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.292810 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"]
Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.304461 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xqgvd"]
Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.313621 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"]
Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.316613 4842 scope.go:117] "RemoveContainer" containerID="9a1b5c3686f5d2f888d619760c2c6f065e2cfcbdb7a7c316780928bdc983a404"
Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.321451 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ltrf2"]
Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.341542 4842 scope.go:117] "RemoveContainer" containerID="18aeb459fdeac67d76d40df4822fb79462c6686bd06d747776d24de4f55ddec6"
Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.369041 4842 scope.go:117] "RemoveContainer" containerID="5aaf954be58c33d0b0d73bce7116e84abb016b1ce966f94de9fa66d4258dc108"
Feb 02 07:02:02 crc kubenswrapper[4842]: I0202 07:02:02.392440 4842 scope.go:117] "RemoveContainer" containerID="331609fa40669bd7840b308a9666007c56af8aa738cc0b311b0bd226734f37d3"
Feb 02 07:02:03 crc kubenswrapper[4842]: I0202 07:02:03.448029 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" path="/var/lib/kubelet/pods/0958b9f3-ea26-4013-9a68-3cf94fa2b557/volumes"
Feb 02 07:02:03 crc kubenswrapper[4842]: I0202 07:02:03.450675 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" path="/var/lib/kubelet/pods/80cf1b43-3437-4ef8-b9c7-a8bd77270228/volumes"
Feb 02 07:02:05 crc kubenswrapper[4842]: I0202 07:02:05.257128 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5549s"
Feb 02 07:02:05 crc kubenswrapper[4842]: I0202 07:02:05.257200 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5549s"
Feb 02 07:02:05 crc kubenswrapper[4842]: I0202 07:02:05.313710 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5549s"
Feb 02 07:02:06 crc kubenswrapper[4842]: I0202 07:02:06.316715 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5549s"
Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.344979 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr"]
Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.345967 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="extract-utilities"
Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.345996 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="extract-utilities"
Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.346014 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="registry-server"
Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346027 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="registry-server"
Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.346048 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="extract-content"
Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346062 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="extract-content"
Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.346080 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="registry-server"
Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346121 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="registry-server"
Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.346144 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="extract-content"
Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346158 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="extract-content"
Feb 02 07:02:10 crc kubenswrapper[4842]: E0202 07:02:10.346186 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="extract-utilities"
Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346200 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="extract-utilities"
Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346448 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0958b9f3-ea26-4013-9a68-3cf94fa2b557" containerName="registry-server"
Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.346469 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cf1b43-3437-4ef8-b9c7-a8bd77270228" containerName="registry-server"
Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.347987 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr"
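Note on the RemoveStaleState burst above: despite the E-level lines, this is routine admission-time cleanup, not an error condition. When the new bundle pod is added, the CPU and memory managers sweep per-container state belonging to pods the API server has already removed (UIDs 80cf1b43... and 0958b9f3...); each cpu_manager.go:410 line is paired with a state_mem.go:107 confirmation. A sketch of the sweep, assuming a map keyed by pod UID; the layout and names are illustrative, not kubelet's:

// Sketch only: drop per-container assignments for pods no longer known
// to the API server before admitting the new pod.
package cpusketch

type assignments map[string]map[string]string // podUID -> container -> cpuset

func (a assignments) removeStale(activePods map[string]bool) {
	for podUID, containers := range a {
		if activePods[podUID] {
			continue // pod still exists; keep its state
		}
		for name := range containers {
			delete(containers, name) // logged as "Deleted CPUSet assignment"
		}
		delete(a, podUID)
	}
}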
Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.351509 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-pxhx2" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.364708 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr"] Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.460319 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.460441 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.460491 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwpp\" (UniqueName: \"kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.561837 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.561919 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.561942 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxwpp\" (UniqueName: \"kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.562677 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.562963 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.580978 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxwpp\" (UniqueName: \"kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp\") pod \"b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") " pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.680057 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" Feb 02 07:02:10 crc kubenswrapper[4842]: I0202 07:02:10.915080 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr"] Feb 02 07:02:11 crc kubenswrapper[4842]: I0202 07:02:11.329976 4842 generic.go:334] "Generic (PLEG): container finished" podID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerID="1a5f35b5a4eb71f9bb5798da2dcdf06862b34028bed5081306f93a56b70bc26e" exitCode=0 Feb 02 07:02:11 crc kubenswrapper[4842]: I0202 07:02:11.330016 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" event={"ID":"3d9034b5-b9d6-4e70-8cae-f6226cd41d78","Type":"ContainerDied","Data":"1a5f35b5a4eb71f9bb5798da2dcdf06862b34028bed5081306f93a56b70bc26e"} Feb 02 07:02:11 crc kubenswrapper[4842]: I0202 07:02:11.330043 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" event={"ID":"3d9034b5-b9d6-4e70-8cae-f6226cd41d78","Type":"ContainerStarted","Data":"429c116ca0225b38ad58e782d7cbf54cac7094f15ae8eb654edf041be3e18bed"} Feb 02 07:02:12 crc kubenswrapper[4842]: I0202 07:02:12.345836 4842 generic.go:334] "Generic (PLEG): container finished" podID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerID="cd6c9e561c6952477b245b41b2a0ba4090b60c5bf07255d24ef3c826cb541957" exitCode=0 Feb 02 07:02:12 crc kubenswrapper[4842]: I0202 07:02:12.345906 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" event={"ID":"3d9034b5-b9d6-4e70-8cae-f6226cd41d78","Type":"ContainerDied","Data":"cd6c9e561c6952477b245b41b2a0ba4090b60c5bf07255d24ef3c826cb541957"} Feb 02 07:02:13 crc kubenswrapper[4842]: I0202 07:02:13.359203 4842 generic.go:334] "Generic (PLEG): container finished" podID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerID="fae69b26f8c1f3300dec7ceb2b1e84f680325e0e32c2d237f63fd4132afa4921" exitCode=0 Feb 02 07:02:13 crc kubenswrapper[4842]: I0202 07:02:13.359374 4842 
Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.733085 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr"
Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.829109 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxwpp\" (UniqueName: \"kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp\") pod \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") "
Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.829318 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle\") pod \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") "
Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.829374 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util\") pod \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\" (UID: \"3d9034b5-b9d6-4e70-8cae-f6226cd41d78\") "
Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.830093 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle" (OuterVolumeSpecName: "bundle") pod "3d9034b5-b9d6-4e70-8cae-f6226cd41d78" (UID: "3d9034b5-b9d6-4e70-8cae-f6226cd41d78"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.835123 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp" (OuterVolumeSpecName: "kube-api-access-dxwpp") pod "3d9034b5-b9d6-4e70-8cae-f6226cd41d78" (UID: "3d9034b5-b9d6-4e70-8cae-f6226cd41d78"). InnerVolumeSpecName "kube-api-access-dxwpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.842822 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util" (OuterVolumeSpecName: "util") pod "3d9034b5-b9d6-4e70-8cae-f6226cd41d78" (UID: "3d9034b5-b9d6-4e70-8cae-f6226cd41d78"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.930664 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxwpp\" (UniqueName: \"kubernetes.io/projected/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-kube-api-access-dxwpp\") on node \"crc\" DevicePath \"\""
Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.930713 4842 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:02:14 crc kubenswrapper[4842]: I0202 07:02:14.930731 4842 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3d9034b5-b9d6-4e70-8cae-f6226cd41d78-util\") on node \"crc\" DevicePath \"\""
Feb 02 07:02:15 crc kubenswrapper[4842]: I0202 07:02:15.381950 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr" event={"ID":"3d9034b5-b9d6-4e70-8cae-f6226cd41d78","Type":"ContainerDied","Data":"429c116ca0225b38ad58e782d7cbf54cac7094f15ae8eb654edf041be3e18bed"}
Feb 02 07:02:15 crc kubenswrapper[4842]: I0202 07:02:15.382823 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="429c116ca0225b38ad58e782d7cbf54cac7094f15ae8eb654edf041be3e18bed"
Feb 02 07:02:15 crc kubenswrapper[4842]: I0202 07:02:15.382166 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr"
Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.805968 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg"]
Feb 02 07:02:18 crc kubenswrapper[4842]: E0202 07:02:18.806743 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="extract"
Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.806765 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="extract"
Feb 02 07:02:18 crc kubenswrapper[4842]: E0202 07:02:18.806786 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="pull"
Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.806799 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="pull"
Feb 02 07:02:18 crc kubenswrapper[4842]: E0202 07:02:18.806832 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="util"
Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.806845 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="util"
Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.807119 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9034b5-b9d6-4e70-8cae-f6226cd41d78" containerName="extract"
Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.807924 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg"
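Two details above are easy to misread. The ContainerDied event for 429c116c... is the pod sandbox itself going away, which is why pod_container_deletor.go:80 then reports "Container not found in pod's containers": the ID matches no app container, an expected outcome for sandbox IDs during teardown. Note also that the .382823 entry is emitted before the .382166 entry; kubelet log lines are not guaranteed to appear in strict timestamp order.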
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.809881 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mhlnc" Feb 02 07:02:18 crc kubenswrapper[4842]: I0202 07:02:18.842671 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg"] Feb 02 07:02:19 crc kubenswrapper[4842]: I0202 07:02:19.003447 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vf7\" (UniqueName: \"kubernetes.io/projected/3081c94c-e2f4-48b5-90b5-8bcc58234a9b-kube-api-access-q2vf7\") pod \"openstack-operator-controller-init-757f46c65d-gfksg\" (UID: \"3081c94c-e2f4-48b5-90b5-8bcc58234a9b\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:19 crc kubenswrapper[4842]: I0202 07:02:19.104539 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vf7\" (UniqueName: \"kubernetes.io/projected/3081c94c-e2f4-48b5-90b5-8bcc58234a9b-kube-api-access-q2vf7\") pod \"openstack-operator-controller-init-757f46c65d-gfksg\" (UID: \"3081c94c-e2f4-48b5-90b5-8bcc58234a9b\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:19 crc kubenswrapper[4842]: I0202 07:02:19.126187 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vf7\" (UniqueName: \"kubernetes.io/projected/3081c94c-e2f4-48b5-90b5-8bcc58234a9b-kube-api-access-q2vf7\") pod \"openstack-operator-controller-init-757f46c65d-gfksg\" (UID: \"3081c94c-e2f4-48b5-90b5-8bcc58234a9b\") " pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:19 crc kubenswrapper[4842]: I0202 07:02:19.128622 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:19 crc kubenswrapper[4842]: I0202 07:02:19.610931 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg"] Feb 02 07:02:19 crc kubenswrapper[4842]: W0202 07:02:19.612680 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3081c94c_e2f4_48b5_90b5_8bcc58234a9b.slice/crio-175006f03505702041d9ab7483f6cfb54f80aeff903a34fc50d292d29a305e15 WatchSource:0}: Error finding container 175006f03505702041d9ab7483f6cfb54f80aeff903a34fc50d292d29a305e15: Status 404 returned error can't find the container with id 175006f03505702041d9ab7483f6cfb54f80aeff903a34fc50d292d29a305e15 Feb 02 07:02:20 crc kubenswrapper[4842]: I0202 07:02:20.411080 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" event={"ID":"3081c94c-e2f4-48b5-90b5-8bcc58234a9b","Type":"ContainerStarted","Data":"175006f03505702041d9ab7483f6cfb54f80aeff903a34fc50d292d29a305e15"} Feb 02 07:02:24 crc kubenswrapper[4842]: I0202 07:02:24.439669 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" event={"ID":"3081c94c-e2f4-48b5-90b5-8bcc58234a9b","Type":"ContainerStarted","Data":"04da284eb78ef4e13742d90c23b7cae9c13bd64706fe2394cba8a2940b9fdb88"} Feb 02 07:02:24 crc kubenswrapper[4842]: I0202 07:02:24.440060 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:24 crc kubenswrapper[4842]: I0202 07:02:24.483188 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" podStartSLOduration=2.285298463 podStartE2EDuration="6.48315805s" podCreationTimestamp="2026-02-02 07:02:18 +0000 UTC" firstStartedPulling="2026-02-02 07:02:19.614816258 +0000 UTC m=+964.992084180" lastFinishedPulling="2026-02-02 07:02:23.812675815 +0000 UTC m=+969.189943767" observedRunningTime="2026-02-02 07:02:24.470942009 +0000 UTC m=+969.848209941" watchObservedRunningTime="2026-02-02 07:02:24.48315805 +0000 UTC m=+969.860426002" Feb 02 07:02:29 crc kubenswrapper[4842]: I0202 07:02:29.132927 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-757f46c65d-gfksg" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.714678 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.715906 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.718597 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nv75v" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.720305 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.721300 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.725273 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-r8bgn" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.726819 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.761724 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.762397 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.765615 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p42lx" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.774177 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.774882 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.779608 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-prsht" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.780649 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.789526 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.792935 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.793905 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.796371 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-x95cv" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.816250 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.829272 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.839274 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.839991 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.845431 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-6g46d" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.858994 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.894796 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67mrk\" (UniqueName: \"kubernetes.io/projected/79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1-kube-api-access-67mrk\") pod \"cinder-operator-controller-manager-8d874c8fc-jknjh\" (UID: \"79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.894829 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vsgg\" (UniqueName: \"kubernetes.io/projected/bda41d33-cd37-4c4d-99d6-3808993000b4-kube-api-access-2vsgg\") pod \"designate-operator-controller-manager-6d9697b7f4-4hrlz\" (UID: \"bda41d33-cd37-4c4d-99d6-3808993000b4\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.894855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8z8b\" (UniqueName: \"kubernetes.io/projected/bd7497e1-afb6-44b5-8270-1021f837a65a-kube-api-access-m8z8b\") pod \"glance-operator-controller-manager-8886f4c47-xq5nz\" (UID: \"bd7497e1-afb6-44b5-8270-1021f837a65a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.894882 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvv9x\" (UniqueName: \"kubernetes.io/projected/17af9a3f-7823-4340-bebc-e50e11807467-kube-api-access-mvv9x\") pod \"heat-operator-controller-manager-69d6db494d-96sfj\" (UID: \"17af9a3f-7823-4340-bebc-e50e11807467\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.894903 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgtrk\" (UniqueName: \"kubernetes.io/projected/c679df42-e383-4a11-a50d-af9dbd4c4eb0-kube-api-access-sgtrk\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-stkw6\" (UID: \"c679df42-e383-4a11-a50d-af9dbd4c4eb0\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.906305 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.906998 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.911483 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-867vq" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.911607 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.935917 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq"] Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.936878 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:02:48 crc kubenswrapper[4842]: I0202 07:02:48.951827 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pmtlr" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.000786 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002504 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67mrk\" (UniqueName: \"kubernetes.io/projected/79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1-kube-api-access-67mrk\") pod \"cinder-operator-controller-manager-8d874c8fc-jknjh\" (UID: \"79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002664 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vsgg\" (UniqueName: \"kubernetes.io/projected/bda41d33-cd37-4c4d-99d6-3808993000b4-kube-api-access-2vsgg\") pod \"designate-operator-controller-manager-6d9697b7f4-4hrlz\" (UID: \"bda41d33-cd37-4c4d-99d6-3808993000b4\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002702 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8z8b\" (UniqueName: \"kubernetes.io/projected/bd7497e1-afb6-44b5-8270-1021f837a65a-kube-api-access-m8z8b\") pod \"glance-operator-controller-manager-8886f4c47-xq5nz\" (UID: \"bd7497e1-afb6-44b5-8270-1021f837a65a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002732 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002764 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvv9x\" (UniqueName: \"kubernetes.io/projected/17af9a3f-7823-4340-bebc-e50e11807467-kube-api-access-mvv9x\") pod \"heat-operator-controller-manager-69d6db494d-96sfj\" (UID: \"17af9a3f-7823-4340-bebc-e50e11807467\") " 
pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002786 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29wq7\" (UniqueName: \"kubernetes.io/projected/95850a5b-9e70-4f77-86ee-ff016eae6e7e-kube-api-access-29wq7\") pod \"horizon-operator-controller-manager-5fb775575f-skdgw\" (UID: \"95850a5b-9e70-4f77-86ee-ff016eae6e7e\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002814 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgtrk\" (UniqueName: \"kubernetes.io/projected/c679df42-e383-4a11-a50d-af9dbd4c4eb0-kube-api-access-sgtrk\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-stkw6\" (UID: \"c679df42-e383-4a11-a50d-af9dbd4c4eb0\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.002836 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7q94\" (UniqueName: \"kubernetes.io/projected/a020d6c0-e749-4442-93e8-64a4c463e9d5-kube-api-access-m7q94\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.039295 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.049414 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.050105 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.054154 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-m2vh5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.054683 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vsgg\" (UniqueName: \"kubernetes.io/projected/bda41d33-cd37-4c4d-99d6-3808993000b4-kube-api-access-2vsgg\") pod \"designate-operator-controller-manager-6d9697b7f4-4hrlz\" (UID: \"bda41d33-cd37-4c4d-99d6-3808993000b4\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.067074 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvv9x\" (UniqueName: \"kubernetes.io/projected/17af9a3f-7823-4340-bebc-e50e11807467-kube-api-access-mvv9x\") pod \"heat-operator-controller-manager-69d6db494d-96sfj\" (UID: \"17af9a3f-7823-4340-bebc-e50e11807467\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.068872 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8z8b\" (UniqueName: \"kubernetes.io/projected/bd7497e1-afb6-44b5-8270-1021f837a65a-kube-api-access-m8z8b\") pod \"glance-operator-controller-manager-8886f4c47-xq5nz\" (UID: \"bd7497e1-afb6-44b5-8270-1021f837a65a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.071423 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgtrk\" (UniqueName: \"kubernetes.io/projected/c679df42-e383-4a11-a50d-af9dbd4c4eb0-kube-api-access-sgtrk\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-stkw6\" (UID: \"c679df42-e383-4a11-a50d-af9dbd4c4eb0\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.071727 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.072648 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.081014 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gqnlp" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.083016 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.084650 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67mrk\" (UniqueName: \"kubernetes.io/projected/79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1-kube-api-access-67mrk\") pod \"cinder-operator-controller-manager-8d874c8fc-jknjh\" (UID: \"79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.090399 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.094630 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.101274 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.103870 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.103915 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk5nf\" (UniqueName: \"kubernetes.io/projected/0222c7fe-6311-4445-bf7f-e43fcb5ec5f9-kube-api-access-xk5nf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-jmvqq\" (UID: \"0222c7fe-6311-4445-bf7f-e43fcb5ec5f9\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.103951 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29wq7\" (UniqueName: \"kubernetes.io/projected/95850a5b-9e70-4f77-86ee-ff016eae6e7e-kube-api-access-29wq7\") pod \"horizon-operator-controller-manager-5fb775575f-skdgw\" (UID: \"95850a5b-9e70-4f77-86ee-ff016eae6e7e\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.103979 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7q94\" (UniqueName: \"kubernetes.io/projected/a020d6c0-e749-4442-93e8-64a4c463e9d5-kube-api-access-m7q94\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.104332 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.104375 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:49.604360771 +0000 UTC m=+994.981628683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.112383 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.113055 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.123420 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-qxp58" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.131974 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.152953 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7q94\" (UniqueName: \"kubernetes.io/projected/a020d6c0-e749-4442-93e8-64a4c463e9d5-kube-api-access-m7q94\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.167831 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29wq7\" (UniqueName: \"kubernetes.io/projected/95850a5b-9e70-4f77-86ee-ff016eae6e7e-kube-api-access-29wq7\") pod \"horizon-operator-controller-manager-5fb775575f-skdgw\" (UID: \"95850a5b-9e70-4f77-86ee-ff016eae6e7e\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.192969 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.207167 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8zw5\" (UniqueName: \"kubernetes.io/projected/bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece-kube-api-access-q8zw5\") pod \"mariadb-operator-controller-manager-67bf948998-nsf9v\" (UID: \"bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.207235 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdpzs\" (UniqueName: \"kubernetes.io/projected/590654af-c639-4e9d-b821-c6caa1016695-kube-api-access-jdpzs\") pod \"manila-operator-controller-manager-7dd968899f-kz2zn\" (UID: \"590654af-c639-4e9d-b821-c6caa1016695\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.207279 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk5nf\" (UniqueName: \"kubernetes.io/projected/0222c7fe-6311-4445-bf7f-e43fcb5ec5f9-kube-api-access-xk5nf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-jmvqq\" (UID: \"0222c7fe-6311-4445-bf7f-e43fcb5ec5f9\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.207305 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j4bz\" (UniqueName: \"kubernetes.io/projected/46313c01-1f03-4185-b7c4-2da5420bd703-kube-api-access-8j4bz\") pod \"keystone-operator-controller-manager-84f48565d4-nzz4p\" (UID: \"46313c01-1f03-4185-b7c4-2da5420bd703\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:02:49 crc 
kubenswrapper[4842]: I0202 07:02:49.211274 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.212048 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.219545 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-gwb7k" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.229522 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.230325 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.243711 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vskbm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.247879 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk5nf\" (UniqueName: \"kubernetes.io/projected/0222c7fe-6311-4445-bf7f-e43fcb5ec5f9-kube-api-access-xk5nf\") pod \"ironic-operator-controller-manager-5f4b8bd54d-jmvqq\" (UID: \"0222c7fe-6311-4445-bf7f-e43fcb5ec5f9\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.249744 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.250588 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.265709 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5xbfp" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.268147 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.273527 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.274398 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.288327 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xp7ph" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310336 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j4bz\" (UniqueName: \"kubernetes.io/projected/46313c01-1f03-4185-b7c4-2da5420bd703-kube-api-access-8j4bz\") pod \"keystone-operator-controller-manager-84f48565d4-nzz4p\" (UID: \"46313c01-1f03-4185-b7c4-2da5420bd703\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310406 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2k7m\" (UniqueName: \"kubernetes.io/projected/60d10db6-9c42-471b-84fb-58e9c04c60fc-kube-api-access-w2k7m\") pod \"octavia-operator-controller-manager-6687f8d877-wpm9z\" (UID: \"60d10db6-9c42-471b-84fb-58e9c04c60fc\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310443 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8ds8\" (UniqueName: \"kubernetes.io/projected/b7d68fac-cffb-4dd6-8c1b-4537a3a36571-kube-api-access-j8ds8\") pod \"nova-operator-controller-manager-55bff696bd-c9lwb\" (UID: \"b7d68fac-cffb-4dd6-8c1b-4537a3a36571\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310464 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8zw5\" (UniqueName: \"kubernetes.io/projected/bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece-kube-api-access-q8zw5\") pod \"mariadb-operator-controller-manager-67bf948998-nsf9v\" (UID: \"bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310493 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdpzs\" (UniqueName: \"kubernetes.io/projected/590654af-c639-4e9d-b821-c6caa1016695-kube-api-access-jdpzs\") pod \"manila-operator-controller-manager-7dd968899f-kz2zn\" (UID: \"590654af-c639-4e9d-b821-c6caa1016695\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.310533 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdsnm\" (UniqueName: \"kubernetes.io/projected/95d96e63-61f2-4d8d-be72-562384cb6f23-kube-api-access-hdsnm\") pod \"neutron-operator-controller-manager-585dbc889-4zk9c\" (UID: \"95d96e63-61f2-4d8d-be72-562384cb6f23\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.322871 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.326601 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z"] Feb 02 07:02:49 crc kubenswrapper[4842]: 
I0202 07:02:49.333362 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j4bz\" (UniqueName: \"kubernetes.io/projected/46313c01-1f03-4185-b7c4-2da5420bd703-kube-api-access-8j4bz\") pod \"keystone-operator-controller-manager-84f48565d4-nzz4p\" (UID: \"46313c01-1f03-4185-b7c4-2da5420bd703\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.333718 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.334510 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.336877 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.337173 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4btph" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.338918 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.342449 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8zw5\" (UniqueName: \"kubernetes.io/projected/bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece-kube-api-access-q8zw5\") pod \"mariadb-operator-controller-manager-67bf948998-nsf9v\" (UID: \"bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.343724 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.352935 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdpzs\" (UniqueName: \"kubernetes.io/projected/590654af-c639-4e9d-b821-c6caa1016695-kube-api-access-jdpzs\") pod \"manila-operator-controller-manager-7dd968899f-kz2zn\" (UID: \"590654af-c639-4e9d-b821-c6caa1016695\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.358383 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.363574 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.364726 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.371393 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fxfcc" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.400104 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.411533 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8ds8\" (UniqueName: \"kubernetes.io/projected/b7d68fac-cffb-4dd6-8c1b-4537a3a36571-kube-api-access-j8ds8\") pod \"nova-operator-controller-manager-55bff696bd-c9lwb\" (UID: \"b7d68fac-cffb-4dd6-8c1b-4537a3a36571\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.411902 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.411940 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mn6p\" (UniqueName: \"kubernetes.io/projected/5e7a9701-ed45-4289-8272-f850efbf1e75-kube-api-access-9mn6p\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.412032 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdsnm\" (UniqueName: \"kubernetes.io/projected/95d96e63-61f2-4d8d-be72-562384cb6f23-kube-api-access-hdsnm\") pod \"neutron-operator-controller-manager-585dbc889-4zk9c\" (UID: \"95d96e63-61f2-4d8d-be72-562384cb6f23\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.412168 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2k7m\" (UniqueName: \"kubernetes.io/projected/60d10db6-9c42-471b-84fb-58e9c04c60fc-kube-api-access-w2k7m\") pod \"octavia-operator-controller-manager-6687f8d877-wpm9z\" (UID: \"60d10db6-9c42-471b-84fb-58e9c04c60fc\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.412240 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znpcb\" (UniqueName: \"kubernetes.io/projected/255c38ec-b5b8-4017-94b8-93553884ed09-kube-api-access-znpcb\") pod \"ovn-operator-controller-manager-788c46999f-d8nns\" (UID: \"255c38ec-b5b8-4017-94b8-93553884ed09\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.426345 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv"] Feb 02 07:02:49 crc 
kubenswrapper[4842]: I0202 07:02:49.434372 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdsnm\" (UniqueName: \"kubernetes.io/projected/95d96e63-61f2-4d8d-be72-562384cb6f23-kube-api-access-hdsnm\") pod \"neutron-operator-controller-manager-585dbc889-4zk9c\" (UID: \"95d96e63-61f2-4d8d-be72-562384cb6f23\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.436272 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8ds8\" (UniqueName: \"kubernetes.io/projected/b7d68fac-cffb-4dd6-8c1b-4537a3a36571-kube-api-access-j8ds8\") pod \"nova-operator-controller-manager-55bff696bd-c9lwb\" (UID: \"b7d68fac-cffb-4dd6-8c1b-4537a3a36571\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.443410 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2k7m\" (UniqueName: \"kubernetes.io/projected/60d10db6-9c42-471b-84fb-58e9c04c60fc-kube-api-access-w2k7m\") pod \"octavia-operator-controller-manager-6687f8d877-wpm9z\" (UID: \"60d10db6-9c42-471b-84fb-58e9c04c60fc\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.453335 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.454012 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.456641 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4dg6v" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.464542 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.501835 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.502681 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.504612 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6mzl2" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.513314 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znpcb\" (UniqueName: \"kubernetes.io/projected/255c38ec-b5b8-4017-94b8-93553884ed09-kube-api-access-znpcb\") pod \"ovn-operator-controller-manager-788c46999f-d8nns\" (UID: \"255c38ec-b5b8-4017-94b8-93553884ed09\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.513716 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxdq\" (UniqueName: \"kubernetes.io/projected/58dd3197-be46-474d-84f5-c066a9483a52-kube-api-access-fjxdq\") pod \"placement-operator-controller-manager-5b964cf4cd-qlxtv\" (UID: \"58dd3197-be46-474d-84f5-c066a9483a52\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.513857 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgfkj\" (UniqueName: \"kubernetes.io/projected/6344fbd8-d71a-4461-ad9a-ad71e339ba03-kube-api-access-hgfkj\") pod \"swift-operator-controller-manager-68fc8c869-lbjfv\" (UID: \"6344fbd8-d71a-4461-ad9a-ad71e339ba03\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.513899 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.513956 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mn6p\" (UniqueName: \"kubernetes.io/projected/5e7a9701-ed45-4289-8272-f850efbf1e75-kube-api-access-9mn6p\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.513984 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.514047 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:50.014028111 +0000 UTC m=+995.391296023 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.524289 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.529809 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.530308 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znpcb\" (UniqueName: \"kubernetes.io/projected/255c38ec-b5b8-4017-94b8-93553884ed09-kube-api-access-znpcb\") pod \"ovn-operator-controller-manager-788c46999f-d8nns\" (UID: \"255c38ec-b5b8-4017-94b8-93553884ed09\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.533980 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mn6p\" (UniqueName: \"kubernetes.io/projected/5e7a9701-ed45-4289-8272-f850efbf1e75-kube-api-access-9mn6p\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.537957 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.544006 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.572319 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.573393 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.574630 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.577316 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-7bj8g" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.597861 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.609013 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.617306 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgfkj\" (UniqueName: \"kubernetes.io/projected/6344fbd8-d71a-4461-ad9a-ad71e339ba03-kube-api-access-hgfkj\") pod \"swift-operator-controller-manager-68fc8c869-lbjfv\" (UID: \"6344fbd8-d71a-4461-ad9a-ad71e339ba03\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.617367 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.617496 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjxdq\" (UniqueName: \"kubernetes.io/projected/58dd3197-be46-474d-84f5-c066a9483a52-kube-api-access-fjxdq\") pod \"placement-operator-controller-manager-5b964cf4cd-qlxtv\" (UID: \"58dd3197-be46-474d-84f5-c066a9483a52\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.617544 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszm7\" (UniqueName: \"kubernetes.io/projected/7db6967e-a602-49a0-83f6-e1caff831173-kube-api-access-rszm7\") pod \"telemetry-operator-controller-manager-64b5b76f97-q7vh6\" (UID: \"7db6967e-a602-49a0-83f6-e1caff831173\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6"
Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.617955 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.618007 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:50.617992542 +0000 UTC m=+995.995260454 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found
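
The repeated "cert" mount failure for infra-operator-controller-manager shows kubelet's per-volume exponential backoff: durationBeforeRetry was 500ms at 07:02:49.104, is 1s here, and doubles again to 2s at 07:02:50.643 below. A minimal sketch of that doubling pattern, using the public k8s.io/apimachinery wait helpers rather than kubelet's internal nestedpendingoperations bookkeeping (values are illustrative):

```go
// Sketch of the 500ms -> 1s -> 2s retry cadence seen in the log.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first durationBeforeRetry
		Factor:   2.0,                    // doubles after each failure
		Steps:    5,                      // stop retrying after a few attempts
	}
	attempt := 0
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		fmt.Printf("attempt %d: MountVolume.SetUp for secret volume...\n", attempt)
		// Stand-in for the mount: keeps failing while the Secret is absent.
		return false, nil
	})
	if err != nil {
		fmt.Println("secret never appeared:", err)
	}
}
```
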
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.633115 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.635765 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjxdq\" (UniqueName: \"kubernetes.io/projected/58dd3197-be46-474d-84f5-c066a9483a52-kube-api-access-fjxdq\") pod \"placement-operator-controller-manager-5b964cf4cd-qlxtv\" (UID: \"58dd3197-be46-474d-84f5-c066a9483a52\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.640978 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgfkj\" (UniqueName: \"kubernetes.io/projected/6344fbd8-d71a-4461-ad9a-ad71e339ba03-kube-api-access-hgfkj\") pod \"swift-operator-controller-manager-68fc8c869-lbjfv\" (UID: \"6344fbd8-d71a-4461-ad9a-ad71e339ba03\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.647481 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.674047 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.711884 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.718760 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rszm7\" (UniqueName: \"kubernetes.io/projected/7db6967e-a602-49a0-83f6-e1caff831173-kube-api-access-rszm7\") pod \"telemetry-operator-controller-manager-64b5b76f97-q7vh6\" (UID: \"7db6967e-a602-49a0-83f6-e1caff831173\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.718800 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hwl\" (UniqueName: \"kubernetes.io/projected/3fb9fda7-8167-4f3d-947b-3e002278ad99-kube-api-access-v5hwl\") pod \"test-operator-controller-manager-56f8bfcd9f-4q9m5\" (UID: \"3fb9fda7-8167-4f3d-947b-3e002278ad99\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.724314 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4ndxm"]
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.725386 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.729074 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-q4pcp"
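
The reflector.go:368 "Caches populated for *v1.Secret" lines are client-go reflectors that kubelet starts so Secrets referenced by these pods (dockercfg pull secrets, webhook certs) can be read from a local watch cache instead of per-mount API calls. A hedged sketch of the same watch-and-cache pattern with a SharedInformerFactory; the in-cluster config and hard-coded namespace are illustrative assumptions, not kubelet's actual wiring:

```go
// Watch Secrets in one namespace and report when the cache has synced,
// mirroring the "Caches populated for *v1.Secret" log lines.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 0, informers.WithNamespace("openstack-operators"))
	inf := factory.Core().V1().Secrets().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			s := obj.(*corev1.Secret)
			fmt.Printf("cache populated for secret %s/%s\n", s.Namespace, s.Name)
		},
	})

	ctx := context.Background()
	factory.Start(ctx.Done())
	cache.WaitForCacheSync(ctx.Done(), inf.HasSynced) // cache is now usable
}
```
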
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.729455 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.741555 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4ndxm"]
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.749231 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rszm7\" (UniqueName: \"kubernetes.io/projected/7db6967e-a602-49a0-83f6-e1caff831173-kube-api-access-rszm7\") pod \"telemetry-operator-controller-manager-64b5b76f97-q7vh6\" (UID: \"7db6967e-a602-49a0-83f6-e1caff831173\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.768169 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"]
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.769529 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.771474 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.771593 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.773203 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-j9fct"
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.777919 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"]
Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.778873 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.820669 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj64g\" (UniqueName: \"kubernetes.io/projected/de128384-b923-4536-a485-33e65a1b7e04-kube-api-access-sj64g\") pod \"watcher-operator-controller-manager-564965969-4ndxm\" (UID: \"de128384-b923-4536-a485-33e65a1b7e04\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.820725 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.820779 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hwl\" (UniqueName: \"kubernetes.io/projected/3fb9fda7-8167-4f3d-947b-3e002278ad99-kube-api-access-v5hwl\") pod \"test-operator-controller-manager-56f8bfcd9f-4q9m5\" (UID: \"3fb9fda7-8167-4f3d-947b-3e002278ad99\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.820800 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzrcg\" (UniqueName: \"kubernetes.io/projected/6b1810ad-df0b-44b5-8ba8-953039b85411-kube-api-access-hzrcg\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.820975 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.838046 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.839008 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.842162 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-75lch" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.842885 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hwl\" (UniqueName: \"kubernetes.io/projected/3fb9fda7-8167-4f3d-947b-3e002278ad99-kube-api-access-v5hwl\") pod \"test-operator-controller-manager-56f8bfcd9f-4q9m5\" (UID: \"3fb9fda7-8167-4f3d-947b-3e002278ad99\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.842981 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.860166 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.875296 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.884541 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.897357 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj"] Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.912277 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.923826 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj64g\" (UniqueName: \"kubernetes.io/projected/de128384-b923-4536-a485-33e65a1b7e04-kube-api-access-sj64g\") pod \"watcher-operator-controller-manager-564965969-4ndxm\" (UID: \"de128384-b923-4536-a485-33e65a1b7e04\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.923878 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.923902 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzrcg\" (UniqueName: \"kubernetes.io/projected/6b1810ad-df0b-44b5-8ba8-953039b85411-kube-api-access-hzrcg\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.923963 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.923988 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl77c\" (UniqueName: \"kubernetes.io/projected/1fffe017-3a94-4565-9778-ccea208aa8cc-kube-api-access-xl77c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zbqhn\" (UID: \"1fffe017-3a94-4565-9778-ccea208aa8cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.924367 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.924392 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.924406 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:50.42439154 +0000 UTC m=+995.801659452 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: E0202 07:02:49.924484 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:50.424456592 +0000 UTC m=+995.801724504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.945355 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj64g\" (UniqueName: \"kubernetes.io/projected/de128384-b923-4536-a485-33e65a1b7e04-kube-api-access-sj64g\") pod \"watcher-operator-controller-manager-564965969-4ndxm\" (UID: \"de128384-b923-4536-a485-33e65a1b7e04\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.950015 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzrcg\" (UniqueName: \"kubernetes.io/projected/6b1810ad-df0b-44b5-8ba8-953039b85411-kube-api-access-hzrcg\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:49 crc kubenswrapper[4842]: I0202 07:02:49.964733 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6"] Feb 02 07:02:49 crc kubenswrapper[4842]: W0202 07:02:49.974953 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc679df42_e383_4a11_a50d_af9dbd4c4eb0.slice/crio-bc10e6ed006e4ba103656f85a5dd8ef40f7073a183bca2747d2f96837ce00b2a WatchSource:0}: Error finding container bc10e6ed006e4ba103656f85a5dd8ef40f7073a183bca2747d2f96837ce00b2a: Status 404 returned error can't find the container with id bc10e6ed006e4ba103656f85a5dd8ef40f7073a183bca2747d2f96837ce00b2a Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.002757 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq"] Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.026093 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.026193 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl77c\" (UniqueName: \"kubernetes.io/projected/1fffe017-3a94-4565-9778-ccea208aa8cc-kube-api-access-xl77c\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-zbqhn\" (UID: \"1fffe017-3a94-4565-9778-ccea208aa8cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.026586 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.026626 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:51.026612848 +0000 UTC m=+996.403880760 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.061941 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl77c\" (UniqueName: \"kubernetes.io/projected/1fffe017-3a94-4565-9778-ccea208aa8cc-kube-api-access-xl77c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zbqhn\" (UID: \"1fffe017-3a94-4565-9778-ccea208aa8cc\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.072270 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.107976 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh"] Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.137696 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0222c7fe_6311_4445_bf7f_e43fcb5ec5f9.slice/crio-34b37d4f5ec7248ec3e0f5402e96c8a604dad15cb56cd20d1c7edf7a407ac79b WatchSource:0}: Error finding container 34b37d4f5ec7248ec3e0f5402e96c8a604dad15cb56cd20d1c7edf7a407ac79b: Status 404 returned error can't find the container with id 34b37d4f5ec7248ec3e0f5402e96c8a604dad15cb56cd20d1c7edf7a407ac79b Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.200900 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.435702 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.436172 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.435912 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.436264 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:51.436247679 +0000 UTC m=+996.813515591 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.436392 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.436451 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:51.436434473 +0000 UTC m=+996.813702465 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.515432 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn"]
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.522488 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb"]
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.534373 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c"]
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.535366 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p"]
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.546066 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw"]
Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.547181 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod590654af_c639_4e9d_b821_c6caa1016695.slice/crio-31e698a79489776b6eeff8812febf764bbeb34a202f6ccfef3c167d6f6c64b44 WatchSource:0}: Error finding container 31e698a79489776b6eeff8812febf764bbeb34a202f6ccfef3c167d6f6c64b44: Status 404 returned error can't find the container with id 31e698a79489776b6eeff8812febf764bbeb34a202f6ccfef3c167d6f6c64b44
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.608657 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns"]
Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.608714 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod255c38ec_b5b8_4017_94b8_93553884ed09.slice/crio-8c3e31bf5ae071a7011ceaaa2903b81b2952361cac84a6e49244a9873ae82830 WatchSource:0}: Error finding container 8c3e31bf5ae071a7011ceaaa2903b81b2952361cac84a6e49244a9873ae82830: Status 404 returned error can't find the container with id 8c3e31bf5ae071a7011ceaaa2903b81b2952361cac84a6e49244a9873ae82830
Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.610352 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60d10db6_9c42_471b_84fb_58e9c04c60fc.slice/crio-7f4e9c051a66af5fc910d29468d1143d028a4610ca9e1284ab19465b9be58d23 WatchSource:0}: Error finding container 7f4e9c051a66af5fc910d29468d1143d028a4610ca9e1284ab19465b9be58d23: Status 404 returned error can't find the container with id 7f4e9c051a66af5fc910d29468d1143d028a4610ca9e1284ab19465b9be58d23
Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.614191 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfe64bf6_fea9_4b04_b4ff_74fe4b9c2ece.slice/crio-0c3a90cf4a939d9ae4e732d538175adbe65c88f7ab17378dac13b73ab664b905 WatchSource:0}: Error finding container 0c3a90cf4a939d9ae4e732d538175adbe65c88f7ab17378dac13b73ab664b905: Status 404 returned error can't find the container with id 0c3a90cf4a939d9ae4e732d538175adbe65c88f7ab17378dac13b73ab664b905
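
These manager.go:1169 warnings are a startup race, not a failure: cAdvisor sees the new crio-* cgroup appear before the runtime has registered the container, so the lookup briefly returns 404. The cgroup slice name itself encodes the pod UID (dashes replaced by underscores) and the container ID. A small, self-contained decoder for that naming convention, purely illustrative rather than kubelet code:

```go
// Decode pod UID and container ID from a kubepods cgroup path.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var re = regexp.MustCompile(`pod([0-9a-f_]+)\.slice/crio-([0-9a-f]+)`)

func main() {
	path := "/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-pod590654af_c639_4e9d_b821_c6caa1016695.slice/" +
		"crio-31e698a79489776b6eeff8812febf764bbeb34a202f6ccfef3c167d6f6c64b44"
	m := re.FindStringSubmatch(path)
	if m == nil {
		fmt.Println("no match")
		return
	}
	podUID := strings.ReplaceAll(m[1], "_", "-")
	fmt.Println("pod UID:     ", podUID) // 590654af-c639-4e9d-b821-c6caa1016695
	fmt.Println("container ID:", m[2])   // matches the manila-operator pod above
}
```
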
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.619301 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v"]
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.625054 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z"]
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.638926 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" event={"ID":"bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece","Type":"ContainerStarted","Data":"0c3a90cf4a939d9ae4e732d538175adbe65c88f7ab17378dac13b73ab664b905"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.640667 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" event={"ID":"bd7497e1-afb6-44b5-8270-1021f837a65a","Type":"ContainerStarted","Data":"3c03de3673e9d65cdc99b54b699a6399d710d9434194e90b9639ec136030d25d"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.642207 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" event={"ID":"46313c01-1f03-4185-b7c4-2da5420bd703","Type":"ContainerStarted","Data":"9f519153c63f4d593ed30e9c77236fe5bb587b497e279fa9c0d2e63e9697ef28"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.642869 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw"
Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.642974 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.643039 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:52.643021122 +0000 UTC m=+998.020289034 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found
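
This retry loop can only succeed once something creates the Secret the volume references; in this deployment that is normally done by the operator tooling (e.g. cert-manager or the bundle installer) rather than by hand. The sketch below only shows the shape of the API call that would satisfy the mount, with placeholder key material; the name and namespace come from the log, everything else is an illustrative assumption:

```go
// Create the kubernetes.io/tls Secret the "cert" volume is waiting for.
// Placeholder PEM bytes: real provisioning should come from cert-manager.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "infra-operator-webhook-server-cert",
			Namespace: "openstack-operators",
		},
		Type: corev1.SecretTypeTLS,
		Data: map[string][]byte{
			"tls.crt": []byte("<PEM certificate here>"), // placeholder
			"tls.key": []byte("<PEM private key here>"), // placeholder
		},
	}
	_, err = cs.CoreV1().Secrets("openstack-operators").
		Create(context.Background(), secret, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```
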
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.643155 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" event={"ID":"17af9a3f-7823-4340-bebc-e50e11807467","Type":"ContainerStarted","Data":"25c1217cd3dd79d04727016625e46be9d83e1e7c7a418748539a40f220891e63"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.644238 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" event={"ID":"79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1","Type":"ContainerStarted","Data":"31fe857424e518dc59c8d4f98cd6183be8851b78e815706f9f4844860fab74fb"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.645238 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" event={"ID":"95d96e63-61f2-4d8d-be72-562384cb6f23","Type":"ContainerStarted","Data":"1da9fd5908869a15666046e0dbecdf5c1108fd3064ba24131f41af4670151ac7"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.646241 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" event={"ID":"bda41d33-cd37-4c4d-99d6-3808993000b4","Type":"ContainerStarted","Data":"aa46e9d4e396ca970079ecd6a3351ba8f4a5995de208c81cbeb836d9f5a06dd0"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.647062 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" event={"ID":"95850a5b-9e70-4f77-86ee-ff016eae6e7e","Type":"ContainerStarted","Data":"a0a91e07e908835f6e3ea0cb6f133a874ead96fe2a6f7b48fdee1f6c4a8a07ca"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.648553 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" event={"ID":"255c38ec-b5b8-4017-94b8-93553884ed09","Type":"ContainerStarted","Data":"8c3e31bf5ae071a7011ceaaa2903b81b2952361cac84a6e49244a9873ae82830"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.649403 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" event={"ID":"c679df42-e383-4a11-a50d-af9dbd4c4eb0","Type":"ContainerStarted","Data":"bc10e6ed006e4ba103656f85a5dd8ef40f7073a183bca2747d2f96837ce00b2a"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.650403 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" event={"ID":"60d10db6-9c42-471b-84fb-58e9c04c60fc","Type":"ContainerStarted","Data":"7f4e9c051a66af5fc910d29468d1143d028a4610ca9e1284ab19465b9be58d23"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.651722 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" event={"ID":"0222c7fe-6311-4445-bf7f-e43fcb5ec5f9","Type":"ContainerStarted","Data":"34b37d4f5ec7248ec3e0f5402e96c8a604dad15cb56cd20d1c7edf7a407ac79b"}
Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.653082 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" event={"ID":"590654af-c639-4e9d-b821-c6caa1016695","Type":"ContainerStarted","Data":"31e698a79489776b6eeff8812febf764bbeb34a202f6ccfef3c167d6f6c64b44"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.654370 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" event={"ID":"b7d68fac-cffb-4dd6-8c1b-4537a3a36571","Type":"ContainerStarted","Data":"9cdbbc0b3c68ecb4483e024288faf64e636d49d33783d443c6db9d3f1ff28cfa"} Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.759741 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn"] Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.761202 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fffe017_3a94_4565_9778_ccea208aa8cc.slice/crio-9de5bb10a9320eb1c77f72610b75362036120760815a91e3fff347ff521f0a98 WatchSource:0}: Error finding container 9de5bb10a9320eb1c77f72610b75362036120760815a91e3fff347ff521f0a98: Status 404 returned error can't find the container with id 9de5bb10a9320eb1c77f72610b75362036120760815a91e3fff347ff521f0a98 Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.763562 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xl77c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zbqhn_openstack-operators(1fffe017-3a94-4565-9778-ccea208aa8cc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 02 07:02:50 crc kubenswrapper[4842]: I0202 07:02:50.764287 4842 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv"] Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.764828 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" podUID="1fffe017-3a94-4565-9778-ccea208aa8cc" Feb 02 07:02:50 crc kubenswrapper[4842]: W0202 07:02:50.765558 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58dd3197_be46_474d_84f5_c066a9483a52.slice/crio-83f613f0808adf7569a397c7486faf613540e1ecc9fa415488800508f4f1a434 WatchSource:0}: Error finding container 83f613f0808adf7569a397c7486faf613540e1ecc9fa415488800508f4f1a434: Status 404 returned error can't find the container with id 83f613f0808adf7569a397c7486faf613540e1ecc9fa415488800508f4f1a434 Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.768030 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjxdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-qlxtv_openstack-operators(58dd3197-be46-474d-84f5-c066a9483a52): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Feb 02 07:02:50 crc kubenswrapper[4842]: E0202 07:02:50.769284 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" podUID="58dd3197-be46-474d-84f5-c066a9483a52" Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.052174 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.052331 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.052388 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:53.052373836 +0000 UTC m=+998.429641748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.794050 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.794777 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.794913 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.794969 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:53.794948848 +0000 UTC m=+999.172216760 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.795023 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.795049 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:53.795040571 +0000 UTC m=+999.172308483 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.874185 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5"] Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.874247 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv"] Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.875815 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" event={"ID":"58dd3197-be46-474d-84f5-c066a9483a52","Type":"ContainerStarted","Data":"83f613f0808adf7569a397c7486faf613540e1ecc9fa415488800508f4f1a434"} Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.876431 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6"] Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.878794 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" podUID="58dd3197-be46-474d-84f5-c066a9483a52" Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.884910 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-4ndxm"] Feb 02 07:02:51 crc kubenswrapper[4842]: I0202 07:02:51.885969 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" event={"ID":"1fffe017-3a94-4565-9778-ccea208aa8cc","Type":"ContainerStarted","Data":"9de5bb10a9320eb1c77f72610b75362036120760815a91e3fff347ff521f0a98"} Feb 02 07:02:51 crc kubenswrapper[4842]: E0202 07:02:51.888464 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" 
podUID="1fffe017-3a94-4565-9778-ccea208aa8cc" Feb 02 07:02:52 crc kubenswrapper[4842]: W0202 07:02:52.482245 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fb9fda7_8167_4f3d_947b_3e002278ad99.slice/crio-621825b8357cbd5ed5e161eb1ac76adaaa6ddbd9ad3dc2d008ce79ce776eb9d8 WatchSource:0}: Error finding container 621825b8357cbd5ed5e161eb1ac76adaaa6ddbd9ad3dc2d008ce79ce776eb9d8: Status 404 returned error can't find the container with id 621825b8357cbd5ed5e161eb1ac76adaaa6ddbd9ad3dc2d008ce79ce776eb9d8 Feb 02 07:02:52 crc kubenswrapper[4842]: I0202 07:02:52.704567 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:52 crc kubenswrapper[4842]: E0202 07:02:52.704701 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:52 crc kubenswrapper[4842]: E0202 07:02:52.704755 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:56.70473643 +0000 UTC m=+1002.082004332 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:52 crc kubenswrapper[4842]: I0202 07:02:52.895522 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" event={"ID":"3fb9fda7-8167-4f3d-947b-3e002278ad99","Type":"ContainerStarted","Data":"621825b8357cbd5ed5e161eb1ac76adaaa6ddbd9ad3dc2d008ce79ce776eb9d8"} Feb 02 07:02:52 crc kubenswrapper[4842]: E0202 07:02:52.897188 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" podUID="58dd3197-be46-474d-84f5-c066a9483a52" Feb 02 07:02:52 crc kubenswrapper[4842]: E0202 07:02:52.898448 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" podUID="1fffe017-3a94-4565-9778-ccea208aa8cc" Feb 02 07:02:53 crc kubenswrapper[4842]: W0202 07:02:53.107700 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7db6967e_a602_49a0_83f6_e1caff831173.slice/crio-6f1aa3346367608698f717b93c278fb081a45071254640485f6fc994679ae853 WatchSource:0}: Error finding container 
Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.112382 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.112514 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.112562 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:57.112546985 +0000 UTC m=+1002.489814907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.822066 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.822414 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.822255 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.822503 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:57.822475183 +0000 UTC m=+1003.199743095 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.822549 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 07:02:53 crc kubenswrapper[4842]: E0202 07:02:53.822598 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:02:57.822583755 +0000 UTC m=+1003.199851667 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.917314 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" event={"ID":"7db6967e-a602-49a0-83f6-e1caff831173","Type":"ContainerStarted","Data":"6f1aa3346367608698f717b93c278fb081a45071254640485f6fc994679ae853"} Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.919002 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" event={"ID":"6344fbd8-d71a-4461-ad9a-ad71e339ba03","Type":"ContainerStarted","Data":"528cbaf33968cb73a4060888a4d50295e5bf5e75d4d7c28bbc71839e750edca1"} Feb 02 07:02:53 crc kubenswrapper[4842]: I0202 07:02:53.920414 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" event={"ID":"de128384-b923-4536-a485-33e65a1b7e04","Type":"ContainerStarted","Data":"be8887bf63c6d3d2fd4ff8c2612b2a0ef8096e3ed573a8e17c7fcfdc3145dc28"} Feb 02 07:02:56 crc kubenswrapper[4842]: I0202 07:02:56.714415 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:02:56 crc kubenswrapper[4842]: E0202 07:02:56.714563 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:56 crc kubenswrapper[4842]: E0202 07:02:56.714621 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:04.714603705 +0000 UTC m=+1010.091871617 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found Feb 02 07:02:57 crc kubenswrapper[4842]: I0202 07:02:57.119627 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.119820 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.120021 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:05.120004031 +0000 UTC m=+1010.497271943 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:02:57 crc kubenswrapper[4842]: I0202 07:02:57.830366 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:57 crc kubenswrapper[4842]: I0202 07:02:57.830542 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.830547 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.830640 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.830643 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:05.830624336 +0000 UTC m=+1011.207892258 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found Feb 02 07:02:57 crc kubenswrapper[4842]: E0202 07:02:57.830706 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:05.830695068 +0000 UTC m=+1011.207962990 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.747882 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:04 crc kubenswrapper[4842]: E0202 07:03:04.748594 4842 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 02 07:03:04 crc kubenswrapper[4842]: E0202 07:03:04.748652 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert podName:a020d6c0-e749-4442-93e8-64a4c463e9d5 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:20.748632739 +0000 UTC m=+1026.125900651 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert") pod "infra-operator-controller-manager-79955696d6-b9qjw" (UID: "a020d6c0-e749-4442-93e8-64a4c463e9d5") : secret "infra-operator-webhook-server-cert" not found Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.992829 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" event={"ID":"255c38ec-b5b8-4017-94b8-93553884ed09","Type":"ContainerStarted","Data":"a68d2bfdff879f71626aeb99ec77e40470c07c1be76606e1137d2ce34b80668c"} Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.993718 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.994829 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" event={"ID":"17af9a3f-7823-4340-bebc-e50e11807467","Type":"ContainerStarted","Data":"cc0ac8431577b0a19cfa2645b2a9f92aadf4b862d21a8b09a84245d3aa7d618b"} Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.995188 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.996436 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" event={"ID":"c679df42-e383-4a11-a50d-af9dbd4c4eb0","Type":"ContainerStarted","Data":"9218e5a1962b1eee936d17d4a2184c3a2ce8f79672d906325f29b4c72c3cedcf"} Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.996762 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.998586 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" event={"ID":"7db6967e-a602-49a0-83f6-e1caff831173","Type":"ContainerStarted","Data":"e7a2e1f30bbf786d5b0be89a35166dde264ea2b83d43087ba90c90ee55d2dc03"} Feb 02 07:03:04 crc kubenswrapper[4842]: I0202 07:03:04.998801 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.000115 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" event={"ID":"60d10db6-9c42-471b-84fb-58e9c04c60fc","Type":"ContainerStarted","Data":"dabef8d40aa3aadb87fe0ee4f895b59d440356a456e883e20e1c11f9a4643aac"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.000260 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.001204 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" event={"ID":"95850a5b-9e70-4f77-86ee-ff016eae6e7e","Type":"ContainerStarted","Data":"5cf5674601202f50fad92fa117b7e0d95ca06511e5cddbfd639b285d9dea79a6"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.001549 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.002686 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" event={"ID":"6344fbd8-d71a-4461-ad9a-ad71e339ba03","Type":"ContainerStarted","Data":"961f25c4071b420b3c28eec91f6c5050f3efcfce377a7034ff097ae19a75d543"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.003010 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.004353 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" event={"ID":"95d96e63-61f2-4d8d-be72-562384cb6f23","Type":"ContainerStarted","Data":"9a5af20c2b4af945d6c2fa6a18cbf7ef29af80ac7cdda6ad6e779ff6683afa91"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.004693 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.006240 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" event={"ID":"46313c01-1f03-4185-b7c4-2da5420bd703","Type":"ContainerStarted","Data":"c3570aae8356f1f5bb4c29b4df257b759d70d0422b301f0b1b85795932c0cdc5"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.006433 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.007439 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" event={"ID":"0222c7fe-6311-4445-bf7f-e43fcb5ec5f9","Type":"ContainerStarted","Data":"c992b6b2f75e632507c82e80c4d1782f7fa85c9eb1d5de105398f0dd31698833"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.007786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.009260 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" event={"ID":"bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece","Type":"ContainerStarted","Data":"e4d307ab82c2777a3782aef180e59ddbe5b53a1ec6f0d1b9a26d444b3768186d"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.009648 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.010912 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" event={"ID":"b7d68fac-cffb-4dd6-8c1b-4537a3a36571","Type":"ContainerStarted","Data":"ce7bf6be491febf7a3c5ec656f9542c910889c6ee977cef8c6ef3b84d33073d8"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.011285 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.012501 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" event={"ID":"3fb9fda7-8167-4f3d-947b-3e002278ad99","Type":"ContainerStarted","Data":"ea4b6c434d50259cca3dc3812f8a27713347fe01ac4dc1e4240ce6e84ff96ad2"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.012815 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.014245 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" event={"ID":"de128384-b923-4536-a485-33e65a1b7e04","Type":"ContainerStarted","Data":"b617ca787c04cfa1d65e7476aedd58afd478467d0b358e49b18b604631129436"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.014570 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.016284 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" event={"ID":"bd7497e1-afb6-44b5-8270-1021f837a65a","Type":"ContainerStarted","Data":"b9061eaf0783cbd2cba0a6e107f49421914d7d8b39f4804187ca2dd1bbcbac03"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.016610 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.018009 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" event={"ID":"bda41d33-cd37-4c4d-99d6-3808993000b4","Type":"ContainerStarted","Data":"49bb9d40f48cc5192f719cb5ff828c798d3802d8a0bf6bec8d3d627b8cb1484d"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.018397 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.019476 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" event={"ID":"79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1","Type":"ContainerStarted","Data":"50267b4164965f2fc34476684b3b0b0d81c5c940bb8d1a9eb0e105acbf45d710"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.019821 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.021262 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" event={"ID":"590654af-c639-4e9d-b821-c6caa1016695","Type":"ContainerStarted","Data":"9e3864674ef6fe202b85b741727442cf290c7be8379a0ecd3a08bc6c4a19ff97"} Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.021612 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.076340 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" podStartSLOduration=2.880715941 podStartE2EDuration="16.076323622s" podCreationTimestamp="2026-02-02 07:02:49 +0000 
UTC" firstStartedPulling="2026-02-02 07:02:50.610940492 +0000 UTC m=+995.988208404" lastFinishedPulling="2026-02-02 07:03:03.806548163 +0000 UTC m=+1009.183816085" observedRunningTime="2026-02-02 07:03:05.029412576 +0000 UTC m=+1010.406680488" watchObservedRunningTime="2026-02-02 07:03:05.076323622 +0000 UTC m=+1010.453591534" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.146352 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" podStartSLOduration=3.8937158309999997 podStartE2EDuration="17.146336256s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.548677758 +0000 UTC m=+995.925945670" lastFinishedPulling="2026-02-02 07:03:03.801298173 +0000 UTC m=+1009.178566095" observedRunningTime="2026-02-02 07:03:05.076531707 +0000 UTC m=+1010.453799639" watchObservedRunningTime="2026-02-02 07:03:05.146336256 +0000 UTC m=+1010.523604168" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.146576 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" podStartSLOduration=3.955278608 podStartE2EDuration="17.146572622s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.614071759 +0000 UTC m=+995.991339671" lastFinishedPulling="2026-02-02 07:03:03.805365763 +0000 UTC m=+1009.182633685" observedRunningTime="2026-02-02 07:03:05.143868565 +0000 UTC m=+1010.521136477" watchObservedRunningTime="2026-02-02 07:03:05.146572622 +0000 UTC m=+1010.523840534" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.155982 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.158849 4842 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.158902 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert podName:5e7a9701-ed45-4289-8272-f850efbf1e75 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:21.158884995 +0000 UTC m=+1026.536152907 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert") pod "openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" (UID: "5e7a9701-ed45-4289-8272-f850efbf1e75") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.222986 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" podStartSLOduration=3.966528915 podStartE2EDuration="17.222970694s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.552738848 +0000 UTC m=+995.930006760" lastFinishedPulling="2026-02-02 07:03:03.809180617 +0000 UTC m=+1009.186448539" observedRunningTime="2026-02-02 07:03:05.181919873 +0000 UTC m=+1010.559187795" watchObservedRunningTime="2026-02-02 07:03:05.222970694 +0000 UTC m=+1010.600238606" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.224273 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" podStartSLOduration=5.529837268 podStartE2EDuration="16.224267646s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:53.111017007 +0000 UTC m=+998.488284929" lastFinishedPulling="2026-02-02 07:03:03.805447385 +0000 UTC m=+1009.182715307" observedRunningTime="2026-02-02 07:03:05.218496754 +0000 UTC m=+1010.595764666" watchObservedRunningTime="2026-02-02 07:03:05.224267646 +0000 UTC m=+1010.601535558" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.259805 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" podStartSLOduration=4.003777633 podStartE2EDuration="17.259788151s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.54912941 +0000 UTC m=+995.926397322" lastFinishedPulling="2026-02-02 07:03:03.805139928 +0000 UTC m=+1009.182407840" observedRunningTime="2026-02-02 07:03:05.255493035 +0000 UTC m=+1010.632760947" watchObservedRunningTime="2026-02-02 07:03:05.259788151 +0000 UTC m=+1010.637056063" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.308840 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" podStartSLOduration=4.182488865 podStartE2EDuration="17.308824509s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.616490259 +0000 UTC m=+995.993758171" lastFinishedPulling="2026-02-02 07:03:03.742825893 +0000 UTC m=+1009.120093815" observedRunningTime="2026-02-02 07:03:05.303126009 +0000 UTC m=+1010.680393921" watchObservedRunningTime="2026-02-02 07:03:05.308824509 +0000 UTC m=+1010.686092421" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.389023 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" podStartSLOduration=8.97582006 podStartE2EDuration="17.389005124s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:49.803444821 +0000 UTC m=+995.180712733" lastFinishedPulling="2026-02-02 07:02:58.216629885 +0000 UTC m=+1003.593897797" observedRunningTime="2026-02-02 07:03:05.347772858 +0000 UTC m=+1010.725040770" 
watchObservedRunningTime="2026-02-02 07:03:05.389005124 +0000 UTC m=+1010.766273036" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.392185 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" podStartSLOduration=10.596458992 podStartE2EDuration="17.392174612s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:49.827736349 +0000 UTC m=+995.205004261" lastFinishedPulling="2026-02-02 07:02:56.623451969 +0000 UTC m=+1002.000719881" observedRunningTime="2026-02-02 07:03:05.38436828 +0000 UTC m=+1010.761636192" watchObservedRunningTime="2026-02-02 07:03:05.392174612 +0000 UTC m=+1010.769442524" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.426118 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" podStartSLOduration=3.839838275 podStartE2EDuration="17.426103228s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.156614271 +0000 UTC m=+995.533882183" lastFinishedPulling="2026-02-02 07:03:03.742879214 +0000 UTC m=+1009.120147136" observedRunningTime="2026-02-02 07:03:05.420606222 +0000 UTC m=+1010.797874134" watchObservedRunningTime="2026-02-02 07:03:05.426103228 +0000 UTC m=+1010.803371140" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.458887 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" podStartSLOduration=10.82209248 podStartE2EDuration="17.458868085s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:49.986673114 +0000 UTC m=+995.363941026" lastFinishedPulling="2026-02-02 07:02:56.623448719 +0000 UTC m=+1002.000716631" observedRunningTime="2026-02-02 07:03:05.452684633 +0000 UTC m=+1010.829952545" watchObservedRunningTime="2026-02-02 07:03:05.458868085 +0000 UTC m=+1010.836135997" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.486673 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" podStartSLOduration=4.231715227 podStartE2EDuration="17.486652399s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.550457782 +0000 UTC m=+995.927725694" lastFinishedPulling="2026-02-02 07:03:03.805394954 +0000 UTC m=+1009.182662866" observedRunningTime="2026-02-02 07:03:05.486517256 +0000 UTC m=+1010.863785168" watchObservedRunningTime="2026-02-02 07:03:05.486652399 +0000 UTC m=+1010.863920311" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.537996 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" podStartSLOduration=5.849944973 podStartE2EDuration="16.537977904s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:53.111493029 +0000 UTC m=+998.488760951" lastFinishedPulling="2026-02-02 07:03:03.79952596 +0000 UTC m=+1009.176793882" observedRunningTime="2026-02-02 07:03:05.537453341 +0000 UTC m=+1010.914721273" watchObservedRunningTime="2026-02-02 07:03:05.537977904 +0000 UTC m=+1010.915245806" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.585064 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" podStartSLOduration=10.347083448 podStartE2EDuration="17.585046883s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:49.910758834 +0000 UTC m=+995.288026746" lastFinishedPulling="2026-02-02 07:02:57.148722269 +0000 UTC m=+1002.525990181" observedRunningTime="2026-02-02 07:03:05.583126936 +0000 UTC m=+1010.960394848" watchObservedRunningTime="2026-02-02 07:03:05.585046883 +0000 UTC m=+1010.962314795" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.614004 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" podStartSLOduration=5.29528772 podStartE2EDuration="16.613991636s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:52.486554575 +0000 UTC m=+997.863822487" lastFinishedPulling="2026-02-02 07:03:03.805258481 +0000 UTC m=+1009.182526403" observedRunningTime="2026-02-02 07:03:05.612669384 +0000 UTC m=+1010.989937296" watchObservedRunningTime="2026-02-02 07:03:05.613991636 +0000 UTC m=+1010.991259548" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.639527 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" podStartSLOduration=4.393588235 podStartE2EDuration="17.639510945s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.554748288 +0000 UTC m=+995.932016200" lastFinishedPulling="2026-02-02 07:03:03.800670978 +0000 UTC m=+1009.177938910" observedRunningTime="2026-02-02 07:03:05.637111736 +0000 UTC m=+1011.014379648" watchObservedRunningTime="2026-02-02 07:03:05.639510945 +0000 UTC m=+1011.016778847" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.669428 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" podStartSLOduration=6.520832225 podStartE2EDuration="17.669409741s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.142435701 +0000 UTC m=+995.519703613" lastFinishedPulling="2026-02-02 07:03:01.291013177 +0000 UTC m=+1006.668281129" observedRunningTime="2026-02-02 07:03:05.661536467 +0000 UTC m=+1011.038804379" watchObservedRunningTime="2026-02-02 07:03:05.669409741 +0000 UTC m=+1011.046677653" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.704672 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" podStartSLOduration=5.981290369 podStartE2EDuration="16.70465727s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:53.111426927 +0000 UTC m=+998.488694849" lastFinishedPulling="2026-02-02 07:03:03.834793838 +0000 UTC m=+1009.212061750" observedRunningTime="2026-02-02 07:03:05.700557309 +0000 UTC m=+1011.077825221" watchObservedRunningTime="2026-02-02 07:03:05.70465727 +0000 UTC m=+1011.081925182" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.878017 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " 
pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:05 crc kubenswrapper[4842]: I0202 07:03:05.878098 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.878157 4842 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.878198 4842 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.878226 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:21.878201475 +0000 UTC m=+1027.255469387 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "webhook-server-cert" not found Feb 02 07:03:05 crc kubenswrapper[4842]: E0202 07:03:05.878244 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs podName:6b1810ad-df0b-44b5-8ba8-953039b85411 nodeName:}" failed. No retries permitted until 2026-02-02 07:03:21.878233145 +0000 UTC m=+1027.255501057 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs") pod "openstack-operator-controller-manager-6b6f655c79-bwmdm" (UID: "6b1810ad-df0b-44b5-8ba8-953039b85411") : secret "metrics-server-cert" not found Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.098870 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-4hrlz" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.099419 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-xq5nz" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.141994 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-96sfj" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.341971 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-stkw6" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.354352 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-jmvqq" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.363553 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-jknjh" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.467786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-skdgw" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.546877 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-kz2zn" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.583847 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-nzz4p" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.612646 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-c9lwb" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.647771 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-nsf9v" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.667127 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-4zk9c" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.680867 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-wpm9z" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.714896 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-d8nns" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.782666 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-lbjfv" Feb 02 07:03:09 crc 
kubenswrapper[4842]: I0202 07:03:09.845652 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-q7vh6" Feb 02 07:03:09 crc kubenswrapper[4842]: I0202 07:03:09.914461 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-4q9m5" Feb 02 07:03:10 crc kubenswrapper[4842]: I0202 07:03:10.075405 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-4ndxm" Feb 02 07:03:12 crc kubenswrapper[4842]: I0202 07:03:12.085865 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" event={"ID":"58dd3197-be46-474d-84f5-c066a9483a52","Type":"ContainerStarted","Data":"c2bb7d6b16e06976dbd7930e31a90a72bfecf22fad35c493144fb56e6d35e484"} Feb 02 07:03:12 crc kubenswrapper[4842]: I0202 07:03:12.115497 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" podStartSLOduration=2.797290336 podStartE2EDuration="23.115473909s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.767919719 +0000 UTC m=+996.145187631" lastFinishedPulling="2026-02-02 07:03:11.086103292 +0000 UTC m=+1016.463371204" observedRunningTime="2026-02-02 07:03:12.111727107 +0000 UTC m=+1017.488995079" watchObservedRunningTime="2026-02-02 07:03:12.115473909 +0000 UTC m=+1017.492741821" Feb 02 07:03:14 crc kubenswrapper[4842]: I0202 07:03:14.101770 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" event={"ID":"1fffe017-3a94-4565-9778-ccea208aa8cc","Type":"ContainerStarted","Data":"6c3269643ef3bf6010400ce9141ac23c939307b53f5e5561fcba170a103c369c"} Feb 02 07:03:14 crc kubenswrapper[4842]: I0202 07:03:14.126084 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zbqhn" podStartSLOduration=2.770613809 podStartE2EDuration="25.126041676s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:02:50.763438089 +0000 UTC m=+996.140706001" lastFinishedPulling="2026-02-02 07:03:13.118865966 +0000 UTC m=+1018.496133868" observedRunningTime="2026-02-02 07:03:14.118601643 +0000 UTC m=+1019.495869555" watchObservedRunningTime="2026-02-02 07:03:14.126041676 +0000 UTC m=+1019.503309628" Feb 02 07:03:19 crc kubenswrapper[4842]: I0202 07:03:19.730806 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:03:19 crc kubenswrapper[4842]: I0202 07:03:19.733694 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-qlxtv" Feb 02 07:03:20 crc kubenswrapper[4842]: I0202 07:03:20.796832 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:20 crc kubenswrapper[4842]: I0202 
07:03:20.804768 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a020d6c0-e749-4442-93e8-64a4c463e9d5-cert\") pod \"infra-operator-controller-manager-79955696d6-b9qjw\" (UID: \"a020d6c0-e749-4442-93e8-64a4c463e9d5\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.029904 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-867vq" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.038374 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.202388 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.211966 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e7a9701-ed45-4289-8272-f850efbf1e75-cert\") pod \"openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb\" (UID: \"5e7a9701-ed45-4289-8272-f850efbf1e75\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.218630 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4btph" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.227308 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.454184 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb"] Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.525008 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw"] Feb 02 07:03:21 crc kubenswrapper[4842]: W0202 07:03:21.531247 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda020d6c0_e749_4442_93e8_64a4c463e9d5.slice/crio-d282524454d04afa92ca9bad9c8d8ab334b02c0655d84ce6c4600fb7cb37b2f3 WatchSource:0}: Error finding container d282524454d04afa92ca9bad9c8d8ab334b02c0655d84ce6c4600fb7cb37b2f3: Status 404 returned error can't find the container with id d282524454d04afa92ca9bad9c8d8ab334b02c0655d84ce6c4600fb7cb37b2f3 Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.913610 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.914139 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.918691 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-metrics-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:21 crc kubenswrapper[4842]: I0202 07:03:21.919244 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6b1810ad-df0b-44b5-8ba8-953039b85411-webhook-certs\") pod \"openstack-operator-controller-manager-6b6f655c79-bwmdm\" (UID: \"6b1810ad-df0b-44b5-8ba8-953039b85411\") " pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:22 crc kubenswrapper[4842]: I0202 07:03:22.183783 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" event={"ID":"5e7a9701-ed45-4289-8272-f850efbf1e75","Type":"ContainerStarted","Data":"b6c3d4d33934752d92a00f58f8958167d1f85b985028cc89e6bd099f41ed776c"} Feb 02 07:03:22 crc kubenswrapper[4842]: I0202 07:03:22.185023 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" event={"ID":"a020d6c0-e749-4442-93e8-64a4c463e9d5","Type":"ContainerStarted","Data":"d282524454d04afa92ca9bad9c8d8ab334b02c0655d84ce6c4600fb7cb37b2f3"} Feb 02 07:03:22 crc 
kubenswrapper[4842]: I0202 07:03:22.208672 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-j9fct" Feb 02 07:03:22 crc kubenswrapper[4842]: I0202 07:03:22.217002 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:22 crc kubenswrapper[4842]: I0202 07:03:22.658438 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm"] Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.204058 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" event={"ID":"a020d6c0-e749-4442-93e8-64a4c463e9d5","Type":"ContainerStarted","Data":"cb265626b6818b1fda83dd99bc60d3d3c5c79c30adce8ae720684c2777192fc3"} Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.204488 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.206246 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" event={"ID":"6b1810ad-df0b-44b5-8ba8-953039b85411","Type":"ContainerStarted","Data":"83762abb9d1ff837f25e4edd01995bd03f53602bbca0538cb380c8eb1fe5c545"} Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.206274 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" event={"ID":"6b1810ad-df0b-44b5-8ba8-953039b85411","Type":"ContainerStarted","Data":"d6acfb12136f98c0ee2b09e8104a7152c04c7d19670fca6dfe9d3607317a820f"} Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.206400 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.207770 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" event={"ID":"5e7a9701-ed45-4289-8272-f850efbf1e75","Type":"ContainerStarted","Data":"fd7fa6c2a404fb6154becdfe49988ce67634b0f4531f1d2c95c37c52ba4e14a7"} Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.208264 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.227841 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" podStartSLOduration=33.956867714 podStartE2EDuration="36.227813256s" podCreationTimestamp="2026-02-02 07:02:48 +0000 UTC" firstStartedPulling="2026-02-02 07:03:21.534075979 +0000 UTC m=+1026.911343891" lastFinishedPulling="2026-02-02 07:03:23.805021511 +0000 UTC m=+1029.182289433" observedRunningTime="2026-02-02 07:03:24.223424997 +0000 UTC m=+1029.600692919" watchObservedRunningTime="2026-02-02 07:03:24.227813256 +0000 UTC m=+1029.605081198" Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.255596 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" 
podStartSLOduration=35.255576189 podStartE2EDuration="35.255576189s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:03:24.252402161 +0000 UTC m=+1029.629670083" watchObservedRunningTime="2026-02-02 07:03:24.255576189 +0000 UTC m=+1029.632844111" Feb 02 07:03:24 crc kubenswrapper[4842]: I0202 07:03:24.293911 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" podStartSLOduration=32.986494234 podStartE2EDuration="35.293886853s" podCreationTimestamp="2026-02-02 07:02:49 +0000 UTC" firstStartedPulling="2026-02-02 07:03:21.496267728 +0000 UTC m=+1026.873535640" lastFinishedPulling="2026-02-02 07:03:23.803660337 +0000 UTC m=+1029.180928259" observedRunningTime="2026-02-02 07:03:24.287758122 +0000 UTC m=+1029.665026034" watchObservedRunningTime="2026-02-02 07:03:24.293886853 +0000 UTC m=+1029.671154765" Feb 02 07:03:31 crc kubenswrapper[4842]: I0202 07:03:31.044799 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-b9qjw" Feb 02 07:03:31 crc kubenswrapper[4842]: I0202 07:03:31.236478 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb" Feb 02 07:03:32 crc kubenswrapper[4842]: I0202 07:03:32.233264 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b6f655c79-bwmdm" Feb 02 07:03:42 crc kubenswrapper[4842]: I0202 07:03:42.146741 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:03:42 crc kubenswrapper[4842]: I0202 07:03:42.147405 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.882317 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.884110 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.885858 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-h5n7q" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.888535 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.888878 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.889150 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.899721 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.961202 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.962197 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:50 crc kubenswrapper[4842]: I0202 07:03:50.964998 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.101761 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.101866 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.101936 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5mdp\" (UniqueName: \"kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.131319 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.203616 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.203867 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5mdp\" (UniqueName: \"kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 
07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.203961 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9p2\" (UniqueName: \"kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.204114 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.204264 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.205199 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.205235 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.222209 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5mdp\" (UniqueName: \"kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp\") pod \"dnsmasq-dns-5f854695bc-nkfxn\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.305300 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.305645 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9p2\" (UniqueName: \"kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.306724 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.323358 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6f9p2\" (UniqueName: \"kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2\") pod \"dnsmasq-dns-84bb9d8bd9-nnwvg\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.428518 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.456833 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.704708 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.740563 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:03:51 crc kubenswrapper[4842]: I0202 07:03:51.955617 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:03:51 crc kubenswrapper[4842]: W0202 07:03:51.958366 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode957a502_d44b_4b06_97c1_e0d7c9d75865.slice/crio-a73d47ab78f64b8b040e07ad9764e19630bd5e8dcd1d54e7b40a33a598434b5d WatchSource:0}: Error finding container a73d47ab78f64b8b040e07ad9764e19630bd5e8dcd1d54e7b40a33a598434b5d: Status 404 returned error can't find the container with id a73d47ab78f64b8b040e07ad9764e19630bd5e8dcd1d54e7b40a33a598434b5d Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.438629 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" event={"ID":"bc463aa5-6e00-466a-8cba-7d1370a7c79b","Type":"ContainerStarted","Data":"43b019fa43de3914a140a52df26f02dc7038a30388bbfbca8f30181349c5a701"} Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.440547 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" event={"ID":"e957a502-d44b-4b06-97c1-e0d7c9d75865","Type":"ContainerStarted","Data":"a73d47ab78f64b8b040e07ad9764e19630bd5e8dcd1d54e7b40a33a598434b5d"} Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.706553 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.740983 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"] Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.742376 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.753208 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"] Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.923510 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.923548 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:52 crc kubenswrapper[4842]: I0202 07:03:52.923569 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhzgm\" (UniqueName: \"kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.024567 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.024618 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.024639 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhzgm\" (UniqueName: \"kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.025785 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.026273 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.048327 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhzgm\" (UniqueName: 
\"kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm\") pod \"dnsmasq-dns-744ffd65bc-v87kh\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") " pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.077251 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.378256 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.394209 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.396179 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.402003 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.532320 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.532433 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4hpx\" (UniqueName: \"kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.532490 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.552409 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.635065 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.635208 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.635289 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hpx\" (UniqueName: \"kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " 
pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.636321 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.636335 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.676895 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hpx\" (UniqueName: \"kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx\") pod \"dnsmasq-dns-95f5f6995-k5tj8\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") " pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.728167 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.841904 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.842987 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.847145 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.847366 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.847483 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.848813 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.851386 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.852262 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.854382 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.854459 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-p5ttv" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951332 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951696 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951738 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951764 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttm4\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951789 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.951844 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.952072 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.952098 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.952123 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:53 crc kubenswrapper[4842]: I0202 07:03:53.952138 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.053869 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.053931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.053959 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.053994 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttm4\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054057 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054083 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054431 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054730 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054828 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054853 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.054980 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.055018 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.055060 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.055082 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.055153 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.055719 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.057063 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.058406 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.059079 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " 
pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.059118 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.059271 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.068702 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttm4\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.074243 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.165543 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.169165 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"] Feb 02 07:03:54 crc kubenswrapper[4842]: W0202 07:03:54.199916 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11728eb4_1f90_43b9_a299_1c906e4445a2.slice/crio-9eb7e583c84ecb63143f0d1ddff31d06b60ec73935bf9ce5848ad1097f8ea606 WatchSource:0}: Error finding container 9eb7e583c84ecb63143f0d1ddff31d06b60ec73935bf9ce5848ad1097f8ea606: Status 404 returned error can't find the container with id 9eb7e583c84ecb63143f0d1ddff31d06b60ec73935bf9ce5848ad1097f8ea606 Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.459775 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" event={"ID":"11728eb4-1f90-43b9-a299-1c906e4445a2","Type":"ContainerStarted","Data":"9eb7e583c84ecb63143f0d1ddff31d06b60ec73935bf9ce5848ad1097f8ea606"} Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.461253 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" event={"ID":"b03422f3-6220-40a9-b410-390213ff282e","Type":"ContainerStarted","Data":"8546f85ea074aefba993cdb0bf6ad37f1ca8e108781983b99c2bd584652a33a1"} Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.514233 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.515344 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.517602 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.517707 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lt4fp" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.519628 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.520532 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.520771 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.520976 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.521122 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.528624 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.569706 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:03:54 crc kubenswrapper[4842]: W0202 07:03:54.575898 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b2ca532_dbbc_4148_8d2f_fc474685f0bd.slice/crio-63d0cfdfa17eb71cf318213bce11d52e23291a7b7ab17f960100e6c0aabd0b83 WatchSource:0}: Error finding container 63d0cfdfa17eb71cf318213bce11d52e23291a7b7ab17f960100e6c0aabd0b83: Status 404 returned error can't find the container with id 63d0cfdfa17eb71cf318213bce11d52e23291a7b7ab17f960100e6c0aabd0b83 Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672638 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672877 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n8dl\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672903 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672936 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672952 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672969 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.672989 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.673005 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.673022 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.673061 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.673076 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774645 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774685 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774739 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774757 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n8dl\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774780 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774814 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774828 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774843 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774860 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774881 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.774898 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.775802 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.775828 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.776538 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.776847 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.778503 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.779072 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.786862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.789963 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.790955 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.791070 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.809404 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n8dl\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.816964 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") " pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:54 crc kubenswrapper[4842]: I0202 07:03:54.858397 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.414647 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:03:55 crc kubenswrapper[4842]: W0202 07:03:55.425669 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod441d47f7_e5dd_456f_b6fa_10a642be6742.slice/crio-f125ead6f6ca269886544c12b159c6f5309a094d04f426e2da08b9aef5bc513c WatchSource:0}: Error finding container f125ead6f6ca269886544c12b159c6f5309a094d04f426e2da08b9aef5bc513c: Status 404 returned error can't find the container with id f125ead6f6ca269886544c12b159c6f5309a094d04f426e2da08b9aef5bc513c Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.472188 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerStarted","Data":"f125ead6f6ca269886544c12b159c6f5309a094d04f426e2da08b9aef5bc513c"} Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.473452 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerStarted","Data":"63d0cfdfa17eb71cf318213bce11d52e23291a7b7ab17f960100e6c0aabd0b83"} Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.870306 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.873403 4842 util.go:30] "No sandbox for pod can be found. 
Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.877874 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xfhgf"
Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.877953 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.878046 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.881877 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.883889 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 02 07:03:55 crc kubenswrapper[4842]: I0202 07:03:55.884601 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.000977 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001056 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001094 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001137 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848c6\" (UniqueName: \"kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001157 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001237 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001279 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.001319 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102128 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102196 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102244 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102272 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102340 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102372 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102419 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.102445 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-848c6\" (UniqueName: \"kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.103042 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.103451 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.103789 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.104205 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.107015 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.117722 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.120327 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.124890 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-848c6\" (UniqueName: \"kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.139551 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.208301 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 02 07:03:56 crc kubenswrapper[4842]: I0202 07:03:56.729335 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 02 07:03:56 crc kubenswrapper[4842]: W0202 07:03:56.737501 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod709c39fb_802f_4690_89f6_41a717e7244c.slice/crio-b0c718acbfc7b29da36fd02c7d5b494cfe5ffb0fab4eeaa9d4ac6e1362b5ae3e WatchSource:0}: Error finding container b0c718acbfc7b29da36fd02c7d5b494cfe5ffb0fab4eeaa9d4ac6e1362b5ae3e: Status 404 returned error can't find the container with id b0c718acbfc7b29da36fd02c7d5b494cfe5ffb0fab4eeaa9d4ac6e1362b5ae3e
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.366251 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.368373 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.371788 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-glnh2"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.371936 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.372096 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.372273 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.392339 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.507351 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerStarted","Data":"b0c718acbfc7b29da36fd02c7d5b494cfe5ffb0fab4eeaa9d4ac6e1362b5ae3e"}
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537298 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537377 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537420 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537445 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
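
The two manager.go:1169 entries above are the only warning-severity records in this stretch: klog prefixes every record with a severity letter plus the date (I0202 = info, W0202 = warning), and here the watch handler apparently saw the freshly created crio-... cgroup before the container was queryable over CRI, so the Status 404 is transient; PLEG reports ContainerStarted for the same container ID moments later. A small sketch that tallies records by severity so such warnings stand out, assuming the klog header format shown above and the same stand-in "kubelet.log" path:

```python
import re
from collections import Counter

# klog record header: severity letter, MMDD, wall clock, pid, file:line]
# e.g. "W0202 07:03:55.425669 4842 manager.go:1169]"
KLOG = re.compile(r'\b([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6}) +\d+ +([\w./-]+:\d+)\]')

def severity_summary(lines):
    """Count records per severity; collect the source sites of warnings/errors."""
    by_sev, noisy_sites = Counter(), Counter()
    for line in lines:
        for sev, _date, _time, site in KLOG.findall(line):
            by_sev[sev] += 1
            if sev in "WEF":
                noisy_sites[site] += 1
    return by_sev, noisy_sites

with open("kubelet.log") as f:  # hypothetical path
    by_sev, sites = severity_summary(f)
    print(dict(by_sev))   # for this stretch, 'W' counts only the two 404s
    print(dict(sites))    # {'manager.go:1169': 2}
```
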
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537470 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537496 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b6r6\" (UniqueName: \"kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537514 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.537540 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638714 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638777 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638812 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638853 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b6r6\" (UniqueName: \"kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638870 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638894 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638948 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.638984 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.640009 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.640201 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.640533 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.640880 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.641025 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.646457 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.649151 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.667595 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.668458 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.672757 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.672894 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.672956 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-krkzp"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.690438 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b6r6\" (UniqueName: \"kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.692353 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.711264 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.745960 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.746003 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngbgx\" (UniqueName: \"kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.746022 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.746080 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.746110 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.847536 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.847587 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngbgx\" (UniqueName: \"kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.847617 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.847701 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.847733 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.848457 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.848457 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.851566 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.855974 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.873565 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngbgx\" (UniqueName: \"kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx\") pod \"memcached-0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " pod="openstack/memcached-0"
Feb 02 07:03:57 crc kubenswrapper[4842]: I0202 07:03:57.998106 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 02 07:03:58 crc kubenswrapper[4842]: I0202 07:03:58.061547 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.543371 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.544731 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.546642 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-g5fgs"
Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.558808 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.675122 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vmlv\" (UniqueName: \"kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv\") pod \"kube-state-metrics-0\" (UID: \"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14\") " pod="openstack/kube-state-metrics-0"
Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.776056 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vmlv\" (UniqueName: \"kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv\") pod \"kube-state-metrics-0\" (UID: \"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14\") " pod="openstack/kube-state-metrics-0"
Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.794604 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vmlv\" (UniqueName: \"kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv\") pod \"kube-state-metrics-0\" (UID: \"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14\") " pod="openstack/kube-state-metrics-0"
Feb 02 07:03:59 crc kubenswrapper[4842]: I0202 07:03:59.872154 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.446263 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sgwrm"]
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.447800 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm"
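
kube-state-metrics-0 above mounts only its kube-api-access-* projected volume (the service-account token), while the OpenStack pods each carry a mix of configmap, secret, empty-dir, projected, downward-api, and local-volume mounts. The UniqueName field encodes the plugin and, for most plugins here, the pod UID: kubernetes.io/<plugin>/<pod-uid>-<volume>; local-volume entries carry the PV name alone. A sketch that inventories volumes per pod by plugin type, under the same one-record-per-line assumption and stand-in path:

```python
import re
from collections import defaultdict

# UniqueName inside a record, e.g.
#   (UniqueName: \"kubernetes.io/configmap/2e4d672b-...-config-data\")
UNIQUE = re.compile(
    r'UniqueName: \\"kubernetes\.io/([\w-]+)/([^\\"]+)\\".*?pod="([^"]+)"')

def volume_inventory(lines):
    """Group volume UniqueName suffixes by pod and plugin type."""
    inv = defaultdict(lambda: defaultdict(set))
    for line in lines:
        for plugin, suffix, pod in UNIQUE.findall(line):
            inv[pod][plugin].add(suffix)
    return inv

with open("kubelet.log") as f:  # hypothetical path
    for pod, plugins in sorted(volume_inventory(f).items()):
        print(pod, {p: len(s) for p, s in plugins.items()})
```
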
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.453076 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-vctt8"]
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.455042 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.460523 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.460642 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.460693 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-wv7db"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.468012 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sgwrm"]
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.492572 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vctt8"]
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532187 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532240 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532261 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532289 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532310 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7kx\" (UniqueName: \"kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532486 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532670 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532689 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532713 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532792 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532859 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532900 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lfhd\" (UniqueName: \"kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.532918 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634395 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634456 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634524 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lfhd\" (UniqueName: \"kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634548 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634625 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634648 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634671 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634708 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634738 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7kx\" (UniqueName: \"kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634784 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634805 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634826 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.634853 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.635446 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.638312 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.638315 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.638549 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.638694 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.639132 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.639236 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.639340 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.639483 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.651905 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.655024 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.655810 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lfhd\" (UniqueName: \"kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd\") pod \"ovn-controller-ovs-vctt8\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.681155 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7kx\" (UniqueName: \"kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx\") pod \"ovn-controller-sgwrm\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.720397 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.721493 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.724572 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.726199 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.733467 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.733802 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qt89x"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.738139 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.747602 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.775742 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.795089 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837399 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837465 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837492 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837514 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837533 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837581 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxl6n\" (UniqueName: \"kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837610 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.837641 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938745 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxl6n\" (UniqueName: \"kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938824 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938896 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938949 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938981 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.938998 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.939021 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.939383 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.939473 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.940332 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.941265 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.951986 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.952006 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.952085 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.957051 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:03 crc kubenswrapper[4842]: I0202 07:04:03.958857 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxl6n\" (UniqueName: \"kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n\") pod \"ovsdbserver-nb-0\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:04 crc kubenswrapper[4842]: I0202 07:04:04.037661 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.039691 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.042578 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.046446 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.047675 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-r55ms"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.047861 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.050561 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.058565 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101123 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101178 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101203 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101237 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101261 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzd26\" (UniqueName: \"kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101488 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101591 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " 
pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.101786 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.209629 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210047 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210124 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210205 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210290 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210333 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210366 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210410 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzd26\" (UniqueName: \"kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.210559 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.211064 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.211706 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.211767 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.216569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.223158 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.225201 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzd26\" (UniqueName: \"kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.230742 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.243935 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:07 crc kubenswrapper[4842]: I0202 07:04:07.365534 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 07:04:11 crc kubenswrapper[4842]: E0202 07:04:11.631986 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Feb 02 07:04:11 crc kubenswrapper[4842]: E0202 07:04:11.632551 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9n8dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(441d47f7-e5dd-456f-b6fa-10a642be6742): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:04:11 crc kubenswrapper[4842]: E0202 07:04:11.633713 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" Feb 02 07:04:12 crc kubenswrapper[4842]: I0202 07:04:12.145932 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:04:12 crc kubenswrapper[4842]: I0202 07:04:12.146265 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.537753 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.537973 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5mdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-5f854695bc-nkfxn_openstack(e957a502-d44b-4b06-97c1-e0d7c9d75865): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.539160 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" podUID="e957a502-d44b-4b06-97c1-e0d7c9d75865" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.560116 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.560409 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ttm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2b2ca532-dbbc-4148-8d2f-fc474685f0bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.561786 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.574947 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.575125 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhzgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-744ffd65bc-v87kh_openstack(b03422f3-6220-40a9-b410-390213ff282e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.576420 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" podUID="b03422f3-6220-40a9-b410-390213ff282e" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.583130 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.583283 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6f9p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-nnwvg_openstack(bc463aa5-6e00-466a-8cba-7d1370a7c79b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.583346 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.583523 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4hpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-k5tj8_openstack(11728eb4-1f90-43b9-a299-1c906e4445a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.584765 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" podUID="bc463aa5-6e00-466a-8cba-7d1370a7c79b" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.585465 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.639045 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" podUID="b03422f3-6220-40a9-b410-390213ff282e" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.639323 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.639368 4842 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" Feb 02 07:04:12 crc kubenswrapper[4842]: E0202 07:04:12.639411 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.151087 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.178255 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.266296 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5mdp\" (UniqueName: \"kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp\") pod \"e957a502-d44b-4b06-97c1-e0d7c9d75865\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.266421 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config\") pod \"e957a502-d44b-4b06-97c1-e0d7c9d75865\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.266472 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config\") pod \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.266519 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc\") pod \"e957a502-d44b-4b06-97c1-e0d7c9d75865\" (UID: \"e957a502-d44b-4b06-97c1-e0d7c9d75865\") " Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.266545 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f9p2\" (UniqueName: \"kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2\") pod \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\" (UID: \"bc463aa5-6e00-466a-8cba-7d1370a7c79b\") " Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.267292 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config" (OuterVolumeSpecName: "config") pod "e957a502-d44b-4b06-97c1-e0d7c9d75865" (UID: "e957a502-d44b-4b06-97c1-e0d7c9d75865"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.267606 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config" (OuterVolumeSpecName: "config") pod "bc463aa5-6e00-466a-8cba-7d1370a7c79b" (UID: "bc463aa5-6e00-466a-8cba-7d1370a7c79b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.267887 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e957a502-d44b-4b06-97c1-e0d7c9d75865" (UID: "e957a502-d44b-4b06-97c1-e0d7c9d75865"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.272049 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2" (OuterVolumeSpecName: "kube-api-access-6f9p2") pod "bc463aa5-6e00-466a-8cba-7d1370a7c79b" (UID: "bc463aa5-6e00-466a-8cba-7d1370a7c79b"). InnerVolumeSpecName "kube-api-access-6f9p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.273407 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp" (OuterVolumeSpecName: "kube-api-access-z5mdp") pod "e957a502-d44b-4b06-97c1-e0d7c9d75865" (UID: "e957a502-d44b-4b06-97c1-e0d7c9d75865"). InnerVolumeSpecName "kube-api-access-z5mdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.368170 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.368232 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f9p2\" (UniqueName: \"kubernetes.io/projected/bc463aa5-6e00-466a-8cba-7d1370a7c79b-kube-api-access-6f9p2\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.368246 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5mdp\" (UniqueName: \"kubernetes.io/projected/e957a502-d44b-4b06-97c1-e0d7c9d75865-kube-api-access-z5mdp\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.368261 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957a502-d44b-4b06-97c1-e0d7c9d75865-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.368271 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc463aa5-6e00-466a-8cba-7d1370a7c79b-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.449304 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.512956 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sgwrm"] Feb 02 07:04:16 crc kubenswrapper[4842]: W0202 07:04:16.516248 4842 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode467a49f_fdc1_4a9e_9907_4425f5ec6177.slice/crio-e22d47c5687c2823a538f3e86888cac139c920a3eeed02648ed069882ffa70ad WatchSource:0}: Error finding container e22d47c5687c2823a538f3e86888cac139c920a3eeed02648ed069882ffa70ad: Status 404 returned error can't find the container with id e22d47c5687c2823a538f3e86888cac139c920a3eeed02648ed069882ffa70ad Feb 02 07:04:16 crc kubenswrapper[4842]: W0202 07:04:16.517533 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbed4dadb_b854_4082_b18a_67f58543bb9a.slice/crio-fdc6e41336cf566f37f5d6d1c8f0d838d650c8a494fb96e4662f58397bbe8dbd WatchSource:0}: Error finding container fdc6e41336cf566f37f5d6d1c8f0d838d650c8a494fb96e4662f58397bbe8dbd: Status 404 returned error can't find the container with id fdc6e41336cf566f37f5d6d1c8f0d838d650c8a494fb96e4662f58397bbe8dbd Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.518902 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.642136 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.650922 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.701814 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2e4d672b-cb7a-406d-ab62-12745f300ef0","Type":"ContainerStarted","Data":"ccad06562fb6f40d062777e6d3a6e4d9830ae7a447085c52c329d40fd37ced11"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.703570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bed4dadb-b854-4082-b18a-67f58543bb9a","Type":"ContainerStarted","Data":"29807641fcc1ca11bd99ef7a60eab40eeea4379d7aa3a9b641c81ec27d1ba950"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.703628 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bed4dadb-b854-4082-b18a-67f58543bb9a","Type":"ContainerStarted","Data":"fdc6e41336cf566f37f5d6d1c8f0d838d650c8a494fb96e4662f58397bbe8dbd"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.705840 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerStarted","Data":"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.706886 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14","Type":"ContainerStarted","Data":"db5e53906e871ace039a809b4c17e0f0a9393b7521bbea23546882f45795c673"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.708000 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerStarted","Data":"1455920f56b035102336b6030ca95115000c538e6e505a3b940faf00be0a7147"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.708803 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" 
event={"ID":"e957a502-d44b-4b06-97c1-e0d7c9d75865","Type":"ContainerDied","Data":"a73d47ab78f64b8b040e07ad9764e19630bd5e8dcd1d54e7b40a33a598434b5d"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.708865 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-nkfxn" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.711176 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" event={"ID":"bc463aa5-6e00-466a-8cba-7d1370a7c79b","Type":"ContainerDied","Data":"43b019fa43de3914a140a52df26f02dc7038a30388bbfbca8f30181349c5a701"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.711291 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-nnwvg" Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.714329 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm" event={"ID":"e467a49f-fdc1-4a9e-9907-4425f5ec6177","Type":"ContainerStarted","Data":"e22d47c5687c2823a538f3e86888cac139c920a3eeed02648ed069882ffa70ad"} Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.746002 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.813806 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.826252 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-nnwvg"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.852952 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.861392 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-nkfxn"] Feb 02 07:04:16 crc kubenswrapper[4842]: I0202 07:04:16.874624 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-vctt8"] Feb 02 07:04:17 crc kubenswrapper[4842]: I0202 07:04:17.444864 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc463aa5-6e00-466a-8cba-7d1370a7c79b" path="/var/lib/kubelet/pods/bc463aa5-6e00-466a-8cba-7d1370a7c79b/volumes" Feb 02 07:04:17 crc kubenswrapper[4842]: I0202 07:04:17.445316 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e957a502-d44b-4b06-97c1-e0d7c9d75865" path="/var/lib/kubelet/pods/e957a502-d44b-4b06-97c1-e0d7c9d75865/volumes" Feb 02 07:04:17 crc kubenswrapper[4842]: I0202 07:04:17.738558 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerStarted","Data":"20790a3e9ff5cd63d4fa516d28e246cafad534d4d8104c6a1f16eb5a3c586904"} Feb 02 07:04:17 crc kubenswrapper[4842]: I0202 07:04:17.740575 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerStarted","Data":"0b86eb955efed6c0beae4754f7a259bd87ec4d6377bfa3532f73d18514ea5e3d"} Feb 02 07:04:19 crc kubenswrapper[4842]: I0202 07:04:19.757592 4842 generic.go:334] "Generic (PLEG): container finished" podID="709c39fb-802f-4690-89f6-41a717e7244c" containerID="97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d" exitCode=0 Feb 02 07:04:19 crc kubenswrapper[4842]: I0202 07:04:19.757647 4842 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerDied","Data":"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d"} Feb 02 07:04:20 crc kubenswrapper[4842]: I0202 07:04:20.767760 4842 generic.go:334] "Generic (PLEG): container finished" podID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerID="29807641fcc1ca11bd99ef7a60eab40eeea4379d7aa3a9b641c81ec27d1ba950" exitCode=0 Feb 02 07:04:20 crc kubenswrapper[4842]: I0202 07:04:20.767822 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bed4dadb-b854-4082-b18a-67f58543bb9a","Type":"ContainerDied","Data":"29807641fcc1ca11bd99ef7a60eab40eeea4379d7aa3a9b641c81ec27d1ba950"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.787485 4842 generic.go:334] "Generic (PLEG): container finished" podID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerID="0e2b21c37cc6f772bef7c4e80d3e6f156ca0d9772f52dfdc03a69fbc57f8dd8b" exitCode=0 Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.787573 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerDied","Data":"0e2b21c37cc6f772bef7c4e80d3e6f156ca0d9772f52dfdc03a69fbc57f8dd8b"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.792195 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerStarted","Data":"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.795060 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerStarted","Data":"c1acee4708434e2281340e86c5dcc1aec94647c18fa79ec17661ad1f08020e9f"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.796559 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14","Type":"ContainerStarted","Data":"7ef2e70ff07365f726387024ecff0fabe2cd2d02cae00c3b439c9a6c10f2e47d"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.796803 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.801923 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerStarted","Data":"6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.803460 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm" event={"ID":"e467a49f-fdc1-4a9e-9907-4425f5ec6177","Type":"ContainerStarted","Data":"42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7"} Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.803565 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.818919 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2e4d672b-cb7a-406d-ab62-12745f300ef0","Type":"ContainerStarted","Data":"95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4"} Feb 02 07:04:22 crc 
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.819115 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.821727 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bed4dadb-b854-4082-b18a-67f58543bb9a","Type":"ContainerStarted","Data":"6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d"}
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.831549 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sgwrm" podStartSLOduration=15.174366101 podStartE2EDuration="19.831528672s" podCreationTimestamp="2026-02-02 07:04:03 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.518048531 +0000 UTC m=+1081.895316443" lastFinishedPulling="2026-02-02 07:04:21.175211092 +0000 UTC m=+1086.552479014" observedRunningTime="2026-02-02 07:04:22.829966374 +0000 UTC m=+1088.207234306" watchObservedRunningTime="2026-02-02 07:04:22.831528672 +0000 UTC m=+1088.208796584"
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.891225 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=18.593767379 podStartE2EDuration="23.891193142s" podCreationTimestamp="2026-02-02 07:03:59 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.663349101 +0000 UTC m=+1082.040617013" lastFinishedPulling="2026-02-02 07:04:21.960774864 +0000 UTC m=+1087.338042776" observedRunningTime="2026-02-02 07:04:22.885266836 +0000 UTC m=+1088.262534748" watchObservedRunningTime="2026-02-02 07:04:22.891193142 +0000 UTC m=+1088.268461054"
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.917382 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.702933604 podStartE2EDuration="28.917358407s" podCreationTimestamp="2026-02-02 07:03:54 +0000 UTC" firstStartedPulling="2026-02-02 07:03:56.739980764 +0000 UTC m=+1062.117248676" lastFinishedPulling="2026-02-02 07:04:15.954405547 +0000 UTC m=+1081.331673479" observedRunningTime="2026-02-02 07:04:22.912075076 +0000 UTC m=+1088.289342988" watchObservedRunningTime="2026-02-02 07:04:22.917358407 +0000 UTC m=+1088.294626319"
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.945958 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.945942511 podStartE2EDuration="26.945942511s" podCreationTimestamp="2026-02-02 07:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:22.93617953 +0000 UTC m=+1088.313447462" watchObservedRunningTime="2026-02-02 07:04:22.945942511 +0000 UTC m=+1088.323210423"
Feb 02 07:04:22 crc kubenswrapper[4842]: I0202 07:04:22.981993 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.181974688 podStartE2EDuration="25.981978518s" podCreationTimestamp="2026-02-02 07:03:57 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.453990293 +0000 UTC m=+1081.831258205" lastFinishedPulling="2026-02-02 07:04:21.253994103 +0000 UTC m=+1086.631262035" observedRunningTime="2026-02-02 07:04:22.981922687 +0000 UTC m=+1088.359190599" watchObservedRunningTime="2026-02-02 07:04:22.981978518 +0000 UTC m=+1088.359246430"
Feb 02 07:04:23 crc kubenswrapper[4842]: I0202 07:04:23.831628 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerStarted","Data":"a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c"}
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.850274 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerStarted","Data":"3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e"}
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.850707 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.850743 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.853241 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerStarted","Data":"12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573"}
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.855020 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerStarted","Data":"c2eb9657c42f955c0263cd3a4cee2ba4741ed6bed3e4fa84ae9f59564a660266"}
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.876865 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-vctt8" podStartSLOduration=17.361547779 podStartE2EDuration="21.876834205s" podCreationTimestamp="2026-02-02 07:04:03 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.871790925 +0000 UTC m=+1082.249058837" lastFinishedPulling="2026-02-02 07:04:21.387077321 +0000 UTC m=+1086.764345263" observedRunningTime="2026-02-02 07:04:24.87213347 +0000 UTC m=+1090.249401442" watchObservedRunningTime="2026-02-02 07:04:24.876834205 +0000 UTC m=+1090.254102157"
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.905466 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.862718095 podStartE2EDuration="18.90544596s" podCreationTimestamp="2026-02-02 07:04:06 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.664743735 +0000 UTC m=+1082.042011647" lastFinishedPulling="2026-02-02 07:04:23.70747156 +0000 UTC m=+1089.084739512" observedRunningTime="2026-02-02 07:04:24.900788946 +0000 UTC m=+1090.278056868" watchObservedRunningTime="2026-02-02 07:04:24.90544596 +0000 UTC m=+1090.282713892"
Feb 02 07:04:24 crc kubenswrapper[4842]: I0202 07:04:24.933266 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.973020733 podStartE2EDuration="22.933198594s" podCreationTimestamp="2026-02-02 07:04:02 +0000 UTC" firstStartedPulling="2026-02-02 07:04:16.755346727 +0000 UTC m=+1082.132614629" lastFinishedPulling="2026-02-02 07:04:23.715524578 +0000 UTC m=+1089.092792490" observedRunningTime="2026-02-02 07:04:24.924835978 +0000 UTC m=+1090.302103900" watchObservedRunningTime="2026-02-02 07:04:24.933198594 +0000 UTC m=+1090.310466536"
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.038930 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.091093 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:25 crc kubenswrapper[4842]: E0202 07:04:25.126838 4842 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.169:55736->38.102.83.169:45991: read tcp 38.102.83.169:55736->38.102.83.169:45991: read: connection reset by peer
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.366312 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.420790 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.862377 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:25 crc kubenswrapper[4842]: I0202 07:04:25.862665 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.209099 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.209153 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.872299 4842 generic.go:334] "Generic (PLEG): container finished" podID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerID="6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48" exitCode=0
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.872423 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" event={"ID":"11728eb4-1f90-43b9-a299-1c906e4445a2","Type":"ContainerDied","Data":"6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48"}
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.874310 4842 generic.go:334] "Generic (PLEG): container finished" podID="b03422f3-6220-40a9-b410-390213ff282e" containerID="1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53" exitCode=0
Feb 02 07:04:26 crc kubenswrapper[4842]: I0202 07:04:26.874687 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" event={"ID":"b03422f3-6220-40a9-b410-390213ff282e","Type":"ContainerDied","Data":"1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53"}
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.420277 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.639506 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.674763 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.675988 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.678844 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.709140 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4glck"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.710056 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.711612 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.719550 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4glck"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.753414 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772605 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772724 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772771 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772825 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772888 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6hzc\" (UniqueName: \"kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772909 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772937 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772973 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.772998 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79wj\" (UniqueName: \"kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874793 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6hzc\" (UniqueName: \"kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874847 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874885 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874902 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h79wj\" (UniqueName: \"kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874926 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874942 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874972 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.874996 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.875038 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.875803 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.875882 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.875947 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.877977 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.880309 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.883130 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.890300 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.895880 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.900838 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79wj\" (UniqueName: \"kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj\") pod \"ovn-controller-metrics-4glck\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.906698 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerStarted","Data":"15488c5f14bed733c354b136f5f9b0303d01f42120de21fa2a655d19a2d681ef"}
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.910130 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6hzc\" (UniqueName: \"kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc\") pod \"dnsmasq-dns-794868bd45-ljcbj\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.911609 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.923026 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="dnsmasq-dns" containerID="cri-o://42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686" gracePeriod=10
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.922926 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" event={"ID":"11728eb4-1f90-43b9-a299-1c906e4445a2","Type":"ContainerStarted","Data":"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686"}
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.924361 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.935900 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="dnsmasq-dns" containerID="cri-o://03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a" gracePeriod=10
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.936128 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" event={"ID":"b03422f3-6220-40a9-b410-390213ff282e","Type":"ContainerStarted","Data":"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a"}
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.936173 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.936184 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.937463 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.939607 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.962437 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"]
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.964728 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" podStartSLOduration=3.690413982 podStartE2EDuration="35.96471067s" podCreationTimestamp="2026-02-02 07:03:52 +0000 UTC" firstStartedPulling="2026-02-02 07:03:53.602826977 +0000 UTC m=+1058.980094889" lastFinishedPulling="2026-02-02 07:04:25.877123625 +0000 UTC m=+1091.254391577" observedRunningTime="2026-02-02 07:04:27.955194085 +0000 UTC m=+1093.332461997" watchObservedRunningTime="2026-02-02 07:04:27.96471067 +0000 UTC m=+1093.341978582"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.977579 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" podStartSLOduration=-9223372001.877216 podStartE2EDuration="34.977559206s" podCreationTimestamp="2026-02-02 07:03:53 +0000 UTC" firstStartedPulling="2026-02-02 07:03:54.202549229 +0000 UTC m=+1059.579817141" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:27.974442319 +0000 UTC m=+1093.351710231" watchObservedRunningTime="2026-02-02 07:04:27.977559206 +0000 UTC m=+1093.354827118"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.992619 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.999032 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 02 07:04:27 crc kubenswrapper[4842]: I0202 07:04:27.999071 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.030792 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.063451 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.078499 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.078581 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.078636 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.078664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2k8s\" (UniqueName: \"kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.078794 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.105401 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.181866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.181987 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.182039 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.182065 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2k8s\" (UniqueName: \"kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.182183 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.183862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.184634 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.184650 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.184820 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.208745 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2k8s\" (UniqueName: \"kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s\") pod \"dnsmasq-dns-757dc6fff9-tttsf\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.258946 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.393961 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.450928 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.486676 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config\") pod \"11728eb4-1f90-43b9-a299-1c906e4445a2\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") "
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.487004 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc\") pod \"11728eb4-1f90-43b9-a299-1c906e4445a2\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") "
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.487128 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4hpx\" (UniqueName: \"kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx\") pod \"11728eb4-1f90-43b9-a299-1c906e4445a2\" (UID: \"11728eb4-1f90-43b9-a299-1c906e4445a2\") "
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.491557 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx" (OuterVolumeSpecName: "kube-api-access-s4hpx") pod "11728eb4-1f90-43b9-a299-1c906e4445a2" (UID: "11728eb4-1f90-43b9-a299-1c906e4445a2"). InnerVolumeSpecName "kube-api-access-s4hpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.530120 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config" (OuterVolumeSpecName: "config") pod "11728eb4-1f90-43b9-a299-1c906e4445a2" (UID: "11728eb4-1f90-43b9-a299-1c906e4445a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.530870 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "11728eb4-1f90-43b9-a299-1c906e4445a2" (UID: "11728eb4-1f90-43b9-a299-1c906e4445a2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.588638 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config\") pod \"b03422f3-6220-40a9-b410-390213ff282e\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") "
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.588780 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhzgm\" (UniqueName: \"kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm\") pod \"b03422f3-6220-40a9-b410-390213ff282e\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") "
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.588837 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc\") pod \"b03422f3-6220-40a9-b410-390213ff282e\" (UID: \"b03422f3-6220-40a9-b410-390213ff282e\") "
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.589443 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.589457 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/11728eb4-1f90-43b9-a299-1c906e4445a2-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.589467 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4hpx\" (UniqueName: \"kubernetes.io/projected/11728eb4-1f90-43b9-a299-1c906e4445a2-kube-api-access-s4hpx\") on node \"crc\" DevicePath \"\""
Feb 02 07:04:28 crc kubenswrapper[4842]: W0202 07:04:28.592505 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda768c72b_df6d_463e_b085_996d7b910985.slice/crio-3895bf2e90ce68029a65e13b1b0d09c0d18f1338f9ff1f7787b7a618bced51a5 WatchSource:0}: Error finding container 3895bf2e90ce68029a65e13b1b0d09c0d18f1338f9ff1f7787b7a618bced51a5: Status 404 returned error can't find the container with id 3895bf2e90ce68029a65e13b1b0d09c0d18f1338f9ff1f7787b7a618bced51a5
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.596304 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm" (OuterVolumeSpecName: "kube-api-access-zhzgm") pod "b03422f3-6220-40a9-b410-390213ff282e" (UID: "b03422f3-6220-40a9-b410-390213ff282e"). InnerVolumeSpecName "kube-api-access-zhzgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.596388 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4glck"]
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.610676 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"]
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.627585 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config" (OuterVolumeSpecName: "config") pod "b03422f3-6220-40a9-b410-390213ff282e" (UID: "b03422f3-6220-40a9-b410-390213ff282e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.633196 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b03422f3-6220-40a9-b410-390213ff282e" (UID: "b03422f3-6220-40a9-b410-390213ff282e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.691616 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.691650 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhzgm\" (UniqueName: \"kubernetes.io/projected/b03422f3-6220-40a9-b410-390213ff282e-kube-api-access-zhzgm\") on node \"crc\" DevicePath \"\""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.691662 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b03422f3-6220-40a9-b410-390213ff282e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.806995 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"]
Feb 02 07:04:28 crc kubenswrapper[4842]: W0202 07:04:28.850759 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a75411c_41b6_4e66_9c29_5dd8e5de211a.slice/crio-6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594 WatchSource:0}: Error finding container 6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594: Status 404 returned error can't find the container with id 6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.942555 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerStarted","Data":"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b"}
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.943577 4842 generic.go:334] "Generic (PLEG): container finished" podID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerID="9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b" exitCode=0
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.943622 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" event={"ID":"50ef0678-fa8e-46f0-87b3-d4cd540ca293","Type":"ContainerDied","Data":"9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b"}
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.943636 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" event={"ID":"50ef0678-fa8e-46f0-87b3-d4cd540ca293","Type":"ContainerStarted","Data":"5ea515418db439b7b85e9f81e72d96b594a2f4593445c0e76fd6508fbe9dc808"}
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.949022 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" event={"ID":"5a75411c-41b6-4e66-9c29-5dd8e5de211a","Type":"ContainerStarted","Data":"6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594"}
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.975521 4842 generic.go:334] "Generic (PLEG): container finished" podID="b03422f3-6220-40a9-b410-390213ff282e" containerID="03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a" exitCode=0
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.975586 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" event={"ID":"b03422f3-6220-40a9-b410-390213ff282e","Type":"ContainerDied","Data":"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a"}
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.975612 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh" event={"ID":"b03422f3-6220-40a9-b410-390213ff282e","Type":"ContainerDied","Data":"8546f85ea074aefba993cdb0bf6ad37f1ca8e108781983b99c2bd584652a33a1"}
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.975634 4842 scope.go:117] "RemoveContainer" containerID="03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.975759 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-744ffd65bc-v87kh"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.996246 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4glck" event={"ID":"a768c72b-df6d-463e-b085-996d7b910985","Type":"ContainerStarted","Data":"a62e03cec1bb8e57732f90cf545c9f9612917cecf937c100e89f185e517fa7dd"}
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.996284 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4glck" event={"ID":"a768c72b-df6d-463e-b085-996d7b910985","Type":"ContainerStarted","Data":"3895bf2e90ce68029a65e13b1b0d09c0d18f1338f9ff1f7787b7a618bced51a5"}
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.998620 4842 generic.go:334] "Generic (PLEG): container finished" podID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerID="42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686" exitCode=0
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.999031 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8"
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.999099 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" event={"ID":"11728eb4-1f90-43b9-a299-1c906e4445a2","Type":"ContainerDied","Data":"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686"}
Feb 02 07:04:28 crc kubenswrapper[4842]: I0202 07:04:28.999131 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-k5tj8" event={"ID":"11728eb4-1f90-43b9-a299-1c906e4445a2","Type":"ContainerDied","Data":"9eb7e583c84ecb63143f0d1ddff31d06b60ec73935bf9ce5848ad1097f8ea606"}
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.037059 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.051872 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4glck" podStartSLOduration=2.0518566 podStartE2EDuration="2.0518566s" podCreationTimestamp="2026-02-02 07:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:29.049285817 +0000 UTC m=+1094.426553729" watchObservedRunningTime="2026-02-02 07:04:29.0518566 +0000 UTC m=+1094.429124512"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.093435 4842 scope.go:117] "RemoveContainer" containerID="1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.106834 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.128511 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"]
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.134503 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.135635 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-744ffd65bc-v87kh"]
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.148800 4842 scope.go:117] "RemoveContainer" containerID="03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a"
Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.163437 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a\": container with ID starting with 03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a not found: ID does not exist" containerID="03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.163487 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a"} err="failed to get container status \"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a\": rpc error: code = NotFound desc = could not find container \"03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a\": container with ID starting with 03672f6ca0d8d8d06d6bbefa3ee0d1a92af4902782ce05f0150e6dfe78e8e26a not found: ID does not exist"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.166254 4842 scope.go:117] "RemoveContainer" containerID="1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53"
Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.166724 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53\": container with ID starting with 1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53 not found: ID does not exist" containerID="1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.166756 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53"} err="failed to get container status \"1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53\": rpc error: code = NotFound desc = could not find container \"1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53\": container with ID starting with 1cda8b1bf4ec8b85bb8b44964c087214d362549894f3526896346652a3603d53 not found: ID does not exist"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.166778 4842 scope.go:117] "RemoveContainer" containerID="42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.184448 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.216450 4842 scope.go:117] "RemoveContainer" containerID="6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.218901 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"]
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.248929 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-k5tj8"]
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.248953 4842 scope.go:117] "RemoveContainer" containerID="42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686"
Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.250041 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686\": container with ID starting with 42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686 not found: ID does not exist" containerID="42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.250075 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686"} err="failed to get container status \"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686\": rpc error: code = NotFound desc = could not find container \"42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686\": container with ID starting with 42633a7d89c5f8c71cff3452e39e653b67b256211568367dfffae9330cfcf686 not found: ID does not exist"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.250098 4842 scope.go:117] "RemoveContainer" containerID="6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48"
Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.252422 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48\": container with ID starting with 6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48 not found: ID does not exist" containerID="6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.252459 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48"} err="failed to get container status \"6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48\": rpc error: code = NotFound desc = could not find container \"6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48\": container with ID starting with 6a444f5c393af32e08e046b64f123d9623635f8c3e21df30a65d0ce53326ee48 not found: ID does not exist"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.375722 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.376280 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="init"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.376374 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="init"
Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.376466 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="init"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.376546 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="init"
Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.376641 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="dnsmasq-dns"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.376720 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="dnsmasq-dns"
Feb 02 07:04:29 crc kubenswrapper[4842]: E0202 07:04:29.376779 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="dnsmasq-dns"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.376830 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="dnsmasq-dns"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.382452 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03422f3-6220-40a9-b410-390213ff282e" containerName="dnsmasq-dns"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.382713 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" containerName="dnsmasq-dns"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.383995 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.386285 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.386524 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.386650 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.386913 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-kzrkr"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.406786 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.443927 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11728eb4-1f90-43b9-a299-1c906e4445a2" path="/var/lib/kubelet/pods/11728eb4-1f90-43b9-a299-1c906e4445a2/volumes"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.444634 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03422f3-6220-40a9-b410-390213ff282e" path="/var/lib/kubelet/pods/b03422f3-6220-40a9-b410-390213ff282e/volumes"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523378 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qdwq\" (UniqueName: \"kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523439 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523477 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523492 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523525 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523543 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.523592 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.624927 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.624978 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625043 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625145 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qdwq\" (UniqueName: \"kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625177 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625203 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625242 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.625791 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.626374 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.626394 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.630746 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.632236 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.634048 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.645064 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qdwq\" (UniqueName: \"kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq\") pod \"ovn-northd-0\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.704122 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.883837 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.903756 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"]
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.952568 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"]
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.954076 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:29 crc kubenswrapper[4842]: I0202 07:04:29.972110 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"]
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.015965 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" event={"ID":"50ef0678-fa8e-46f0-87b3-d4cd540ca293","Type":"ContainerStarted","Data":"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c"}
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.016845 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-794868bd45-ljcbj"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.017916 4842 generic.go:334] "Generic (PLEG): container finished" podID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" containerID="fde966c086e7db7ae0ce126efe437dd36616af251981330e64ff1cbb68eccd77" exitCode=0
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.017957 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" event={"ID":"5a75411c-41b6-4e66-9c29-5dd8e5de211a","Type":"ContainerDied","Data":"fde966c086e7db7ae0ce126efe437dd36616af251981330e64ff1cbb68eccd77"}
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.038991 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.039264 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gcnp\" (UniqueName: \"kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.039294 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.039320 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.039348 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.042662 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" podStartSLOduration=3.042651366 podStartE2EDuration="3.042651366s" podCreationTimestamp="2026-02-02 07:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:30.039877657 +0000 UTC m=+1095.417145569" watchObservedRunningTime="2026-02-02 07:04:30.042651366 +0000 UTC m=+1095.419919278"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.141856 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.142479 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gcnp\" (UniqueName: \"kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.142521 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.142597 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.143103 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.143150 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.143643 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.143652 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm"
Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.143900 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.161961 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gcnp\" (UniqueName: \"kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp\") pod \"dnsmasq-dns-6cb545bd4c-hqszm\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: E0202 07:04:30.255301 4842 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 02 07:04:30 crc kubenswrapper[4842]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/5a75411c-41b6-4e66-9c29-5dd8e5de211a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 02 07:04:30 crc kubenswrapper[4842]: > podSandboxID="6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594" Feb 02 07:04:30 crc kubenswrapper[4842]: E0202 07:04:30.255464 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:04:30 crc kubenswrapper[4842]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h99h64ch5dbh6dh555h587h64bh5cfh647h5fdh57ch679h9h597h5f5hbch59bh54fh575h566h667h586h5f5h65ch5bch57h68h65ch58bh694h5cfq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2k8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-757dc6fff9-tttsf_openstack(5a75411c-41b6-4e66-9c29-5dd8e5de211a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/5a75411c-41b6-4e66-9c29-5dd8e5de211a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 02 07:04:30 crc kubenswrapper[4842]: > logger="UnhandledError" Feb 02 07:04:30 crc kubenswrapper[4842]: E0202 07:04:30.256667 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/5a75411c-41b6-4e66-9c29-5dd8e5de211a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" podUID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.273875 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.359790 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 07:04:30 crc kubenswrapper[4842]: I0202 07:04:30.498491 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"] Feb 02 07:04:30 crc kubenswrapper[4842]: W0202 07:04:30.506037 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf57fef97_6ad3_4b54_9859_2b33853f7f6d.slice/crio-7707ee54a5265cd6f331b436e56fc1213a27c7e80bff860552b4df87b7cb0473 WatchSource:0}: Error finding container 7707ee54a5265cd6f331b436e56fc1213a27c7e80bff860552b4df87b7cb0473: Status 404 returned error can't find the container with id 7707ee54a5265cd6f331b436e56fc1213a27c7e80bff860552b4df87b7cb0473 Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.024736 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.030925 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.040623 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.040683 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.040780 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-qhjpw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.040931 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.047365 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6064786a-fa53-47a7-88ee-384cf70a86c6","Type":"ContainerStarted","Data":"7d98f1543b01a1b62fffe3edf648bd287b5220b26fe6cebfee732f435b17cba6"} Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.049191 4842 generic.go:334] "Generic (PLEG): container finished" podID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerID="95945828629b93199fdf9c3ec54c43205bcf2d7c6c586860cf34627eab21e480" exitCode=0 Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.049441 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" event={"ID":"f57fef97-6ad3-4b54-9859-2b33853f7f6d","Type":"ContainerDied","Data":"95945828629b93199fdf9c3ec54c43205bcf2d7c6c586860cf34627eab21e480"} Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.049548 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" event={"ID":"f57fef97-6ad3-4b54-9859-2b33853f7f6d","Type":"ContainerStarted","Data":"7707ee54a5265cd6f331b436e56fc1213a27c7e80bff860552b4df87b7cb0473"} Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.138430 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.276003 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.276086 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.276106 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.276147 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: 
I0202 07:04:31.276291 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.276310 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9t87\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.377845 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.377917 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.377938 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9t87\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.377977 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.378009 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.378028 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.378149 4842 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.378162 4842 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.378226 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift podName:928a8c7e-d835-4795-8197-1861e4fd8f83 nodeName:}" failed. 
No retries permitted until 2026-02-02 07:04:31.878198516 +0000 UTC m=+1097.255466418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift") pod "swift-storage-0" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83") : configmap "swift-ring-files" not found Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.378308 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.378506 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.379096 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.382500 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.399377 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9t87\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.426911 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.483965 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.524620 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-kbdxw"] Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.525316 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" containerName="init" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.525335 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" containerName="init" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.525504 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" containerName="init" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.526086 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.528571 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.529185 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.529984 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.547645 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kbdxw"] Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.580392 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc\") pod \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.580446 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb\") pod \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.580481 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2k8s\" (UniqueName: \"kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s\") pod \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.580508 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb\") pod \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.580536 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config\") pod \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\" (UID: \"5a75411c-41b6-4e66-9c29-5dd8e5de211a\") " Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.588279 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s" (OuterVolumeSpecName: "kube-api-access-k2k8s") pod "5a75411c-41b6-4e66-9c29-5dd8e5de211a" (UID: "5a75411c-41b6-4e66-9c29-5dd8e5de211a"). InnerVolumeSpecName "kube-api-access-k2k8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.633498 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a75411c-41b6-4e66-9c29-5dd8e5de211a" (UID: "5a75411c-41b6-4e66-9c29-5dd8e5de211a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.634361 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a75411c-41b6-4e66-9c29-5dd8e5de211a" (UID: "5a75411c-41b6-4e66-9c29-5dd8e5de211a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.637887 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config" (OuterVolumeSpecName: "config") pod "5a75411c-41b6-4e66-9c29-5dd8e5de211a" (UID: "5a75411c-41b6-4e66-9c29-5dd8e5de211a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.657821 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a75411c-41b6-4e66-9c29-5dd8e5de211a" (UID: "5a75411c-41b6-4e66-9c29-5dd8e5de211a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.682592 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.682642 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.682662 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.682837 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.682909 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4zkn\" (UniqueName: \"kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683076 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683179 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683299 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683316 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683327 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2k8s\" (UniqueName: \"kubernetes.io/projected/5a75411c-41b6-4e66-9c29-5dd8e5de211a-kube-api-access-k2k8s\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683337 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.683349 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a75411c-41b6-4e66-9c29-5dd8e5de211a-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786312 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786375 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786436 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786455 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786497 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786520 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4zkn\" (UniqueName: \"kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.786581 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.787455 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.787573 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.787586 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.790768 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.790986 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.793614 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.802514 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4zkn\" (UniqueName: 
\"kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn\") pod \"swift-ring-rebalance-kbdxw\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.841941 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:31 crc kubenswrapper[4842]: I0202 07:04:31.887690 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.887909 4842 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.887942 4842 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 07:04:31 crc kubenswrapper[4842]: E0202 07:04:31.888005 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift podName:928a8c7e-d835-4795-8197-1861e4fd8f83 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:32.887984954 +0000 UTC m=+1098.265252876 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift") pod "swift-storage-0" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83") : configmap "swift-ring-files" not found Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.062760 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" event={"ID":"5a75411c-41b6-4e66-9c29-5dd8e5de211a","Type":"ContainerDied","Data":"6b3b3bd6441f4b536256f6e5decf016c5300a5522fe6fb39834290d77db0d594"} Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.062958 4842 scope.go:117] "RemoveContainer" containerID="fde966c086e7db7ae0ce126efe437dd36616af251981330e64ff1cbb68eccd77" Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.063063 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757dc6fff9-tttsf" Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.078294 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" event={"ID":"f57fef97-6ad3-4b54-9859-2b33853f7f6d","Type":"ContainerStarted","Data":"f0a94a75b63c1a8041b919515cc44d86376bbe513e93d1848bcd51190a1482d3"} Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.078341 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.123922 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" podStartSLOduration=3.123900925 podStartE2EDuration="3.123900925s" podCreationTimestamp="2026-02-02 07:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:32.100476308 +0000 UTC m=+1097.477744220" watchObservedRunningTime="2026-02-02 07:04:32.123900925 +0000 UTC m=+1097.501168837" Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.196331 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"] Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.242714 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757dc6fff9-tttsf"] Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.411477 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kbdxw"] Feb 02 07:04:32 crc kubenswrapper[4842]: W0202 07:04:32.418560 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15fb5e79_8dd5_46ae_b8dd_6944cc810350.slice/crio-1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade WatchSource:0}: Error finding container 1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade: Status 404 returned error can't find the container with id 1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade Feb 02 07:04:32 crc kubenswrapper[4842]: I0202 07:04:32.900771 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:32 crc kubenswrapper[4842]: E0202 07:04:32.901339 4842 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 07:04:32 crc kubenswrapper[4842]: E0202 07:04:32.901362 4842 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 07:04:32 crc kubenswrapper[4842]: E0202 07:04:32.901410 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift podName:928a8c7e-d835-4795-8197-1861e4fd8f83 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:34.901393398 +0000 UTC m=+1100.278661310 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift") pod "swift-storage-0" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83") : configmap "swift-ring-files" not found Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.114409 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6064786a-fa53-47a7-88ee-384cf70a86c6","Type":"ContainerStarted","Data":"e96862cf77fa128f12f3b9982dfad78848395bebaf2c0c3ff7a1cca181e725f0"} Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.114460 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6064786a-fa53-47a7-88ee-384cf70a86c6","Type":"ContainerStarted","Data":"6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c"} Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.114670 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.119005 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kbdxw" event={"ID":"15fb5e79-8dd5-46ae-b8dd-6944cc810350","Type":"ContainerStarted","Data":"1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade"} Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.146399 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.621641871 podStartE2EDuration="4.146376162s" podCreationTimestamp="2026-02-02 07:04:29 +0000 UTC" firstStartedPulling="2026-02-02 07:04:30.372312536 +0000 UTC m=+1095.749580448" lastFinishedPulling="2026-02-02 07:04:31.897046827 +0000 UTC m=+1097.274314739" observedRunningTime="2026-02-02 07:04:33.135860733 +0000 UTC m=+1098.513128645" watchObservedRunningTime="2026-02-02 07:04:33.146376162 +0000 UTC m=+1098.523644074" Feb 02 07:04:33 crc kubenswrapper[4842]: I0202 07:04:33.442572 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a75411c-41b6-4e66-9c29-5dd8e5de211a" path="/var/lib/kubelet/pods/5a75411c-41b6-4e66-9c29-5dd8e5de211a/volumes" Feb 02 07:04:34 crc kubenswrapper[4842]: I0202 07:04:34.948885 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:34 crc kubenswrapper[4842]: E0202 07:04:34.949064 4842 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 07:04:34 crc kubenswrapper[4842]: E0202 07:04:34.949335 4842 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 07:04:34 crc kubenswrapper[4842]: E0202 07:04:34.949392 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift podName:928a8c7e-d835-4795-8197-1861e4fd8f83 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:38.949374026 +0000 UTC m=+1104.326641938 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift") pod "swift-storage-0" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83") : configmap "swift-ring-files" not found Feb 02 07:04:34 crc kubenswrapper[4842]: I0202 07:04:34.962775 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qm2z9"] Feb 02 07:04:34 crc kubenswrapper[4842]: I0202 07:04:34.974710 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:34 crc kubenswrapper[4842]: I0202 07:04:34.978725 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 07:04:34 crc kubenswrapper[4842]: I0202 07:04:34.979505 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qm2z9"] Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.052077 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.052321 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wsp\" (UniqueName: \"kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.155110 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.155185 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wsp\" (UniqueName: \"kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.156258 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.177100 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wsp\" (UniqueName: \"kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp\") pod \"root-account-create-update-qm2z9\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:35 crc kubenswrapper[4842]: I0202 07:04:35.299935 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:36 crc kubenswrapper[4842]: I0202 07:04:36.300022 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qm2z9"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.159250 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kbdxw" event={"ID":"15fb5e79-8dd5-46ae-b8dd-6944cc810350","Type":"ContainerStarted","Data":"be09858b0b26720a1b1eb72e60d3de0b3dbd4ce4a7e6fc548a4d5f3d171165c8"} Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.160603 4842 generic.go:334] "Generic (PLEG): container finished" podID="19378e36-9154-451c-88fe-dab4522aa0dc" containerID="fd930d739c77e2c60500ea7cab9f16a6ba8a914130efb858b41ff112a5549c6c" exitCode=0 Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.160654 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm2z9" event={"ID":"19378e36-9154-451c-88fe-dab4522aa0dc","Type":"ContainerDied","Data":"fd930d739c77e2c60500ea7cab9f16a6ba8a914130efb858b41ff112a5549c6c"} Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.160801 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm2z9" event={"ID":"19378e36-9154-451c-88fe-dab4522aa0dc","Type":"ContainerStarted","Data":"289357a68298a49918f4a3d7e9df807fcf5158b46465e992e8a6e7dcb82706d2"} Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.177500 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-kbdxw" podStartSLOduration=2.683563434 podStartE2EDuration="6.177482851s" podCreationTimestamp="2026-02-02 07:04:31 +0000 UTC" firstStartedPulling="2026-02-02 07:04:32.422384948 +0000 UTC m=+1097.799652860" lastFinishedPulling="2026-02-02 07:04:35.916304365 +0000 UTC m=+1101.293572277" observedRunningTime="2026-02-02 07:04:37.173277537 +0000 UTC m=+1102.550545459" watchObservedRunningTime="2026-02-02 07:04:37.177482851 +0000 UTC m=+1102.554750763" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.600760 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6ctcq"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.601798 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.610964 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6ctcq"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.703259 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.703578 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwnf7\" (UniqueName: \"kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.713671 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0ec7-account-create-update-x5rkz"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.714578 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.717239 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.730470 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0ec7-account-create-update-x5rkz"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.805602 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.805671 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcbc4\" (UniqueName: \"kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.805722 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwnf7\" (UniqueName: \"kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.805807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.806572 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.826268 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwnf7\" (UniqueName: \"kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7\") pod \"keystone-db-create-6ctcq\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.907662 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcbc4\" (UniqueName: \"kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.907799 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.908525 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.915010 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.920571 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-p28sd"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.921798 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p28sd" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.926868 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcbc4\" (UniqueName: \"kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4\") pod \"keystone-0ec7-account-create-update-x5rkz\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.942333 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85ce-account-create-update-rxmcp"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.943714 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.946791 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.952716 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-p28sd"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.974877 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85ce-account-create-update-rxmcp"] Feb 02 07:04:37 crc kubenswrapper[4842]: I0202 07:04:37.999187 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.009428 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7svj\" (UniqueName: \"kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.009518 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.009594 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tbjg\" (UniqueName: \"kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.009833 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.103253 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.111417 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.111466 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tbjg\" (UniqueName: \"kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.111536 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.111594 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7svj\" (UniqueName: \"kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.112361 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.113337 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.130427 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7svj\" (UniqueName: \"kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj\") pod \"placement-db-create-p28sd\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.132324 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tbjg\" (UniqueName: \"kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg\") pod \"placement-85ce-account-create-update-rxmcp\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.290878 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vsjtz"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.291832 4842 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.305718 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vsjtz"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.315895 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.315999 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xns4j\" (UniqueName: \"kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.319510 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p28sd" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.340427 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.383708 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6ctcq"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.396532 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2348-account-create-update-l9hwl"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.397784 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: W0202 07:04:38.399281 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4450e400_557b_4092_8f73_124910137dc4.slice/crio-d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450 WatchSource:0}: Error finding container d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450: Status 404 returned error can't find the container with id d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450 Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.402723 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.420734 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xns4j\" (UniqueName: \"kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.420818 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxhpl\" (UniqueName: \"kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.421025 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.421070 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.423698 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.442575 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xns4j\" (UniqueName: \"kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j\") pod \"glance-db-create-vsjtz\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.472555 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2348-account-create-update-l9hwl"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.522833 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.523359 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxhpl\" (UniqueName: \"kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.526734 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.548691 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxhpl\" (UniqueName: \"kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl\") pod \"glance-2348-account-create-update-l9hwl\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.592360 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.605429 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0ec7-account-create-update-x5rkz"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.624665 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts\") pod \"19378e36-9154-451c-88fe-dab4522aa0dc\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.624761 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5wsp\" (UniqueName: \"kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp\") pod \"19378e36-9154-451c-88fe-dab4522aa0dc\" (UID: \"19378e36-9154-451c-88fe-dab4522aa0dc\") " Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.625321 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19378e36-9154-451c-88fe-dab4522aa0dc" (UID: "19378e36-9154-451c-88fe-dab4522aa0dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.625639 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.629613 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp" (OuterVolumeSpecName: "kube-api-access-f5wsp") pod "19378e36-9154-451c-88fe-dab4522aa0dc" (UID: "19378e36-9154-451c-88fe-dab4522aa0dc"). InnerVolumeSpecName "kube-api-access-f5wsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.727104 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5wsp\" (UniqueName: \"kubernetes.io/projected/19378e36-9154-451c-88fe-dab4522aa0dc-kube-api-access-f5wsp\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.727402 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19378e36-9154-451c-88fe-dab4522aa0dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.740016 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.805053 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-p28sd"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.917584 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vsjtz"] Feb 02 07:04:38 crc kubenswrapper[4842]: I0202 07:04:38.930857 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85ce-account-create-update-rxmcp"] Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.032783 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:39 crc kubenswrapper[4842]: E0202 07:04:39.032956 4842 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 02 07:04:39 crc kubenswrapper[4842]: E0202 07:04:39.032971 4842 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 02 07:04:39 crc kubenswrapper[4842]: E0202 07:04:39.033018 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift podName:928a8c7e-d835-4795-8197-1861e4fd8f83 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:47.033004758 +0000 UTC m=+1112.410272670 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift") pod "swift-storage-0" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83") : configmap "swift-ring-files" not found Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.176471 4842 generic.go:334] "Generic (PLEG): container finished" podID="6601a68f-34a5-4629-ac74-97cb14e809f3" containerID="af9aab2a24cfc4f124984122e483edf359b136da9788f63d0af01da2b636aa44" exitCode=0 Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.176541 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0ec7-account-create-update-x5rkz" event={"ID":"6601a68f-34a5-4629-ac74-97cb14e809f3","Type":"ContainerDied","Data":"af9aab2a24cfc4f124984122e483edf359b136da9788f63d0af01da2b636aa44"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.176778 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0ec7-account-create-update-x5rkz" event={"ID":"6601a68f-34a5-4629-ac74-97cb14e809f3","Type":"ContainerStarted","Data":"1bfda80e82935159993cfdb80d57362500543b4c3a630820faa6ff4dbddd1689"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.178526 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qm2z9" event={"ID":"19378e36-9154-451c-88fe-dab4522aa0dc","Type":"ContainerDied","Data":"289357a68298a49918f4a3d7e9df807fcf5158b46465e992e8a6e7dcb82706d2"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.178579 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289357a68298a49918f4a3d7e9df807fcf5158b46465e992e8a6e7dcb82706d2" Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.178583 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qm2z9" Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.180062 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vsjtz" event={"ID":"cf6c9856-8e0e-462e-a2bb-b21847078b54","Type":"ContainerStarted","Data":"8450cdf340185e60d5f4db9ea47d0c0bf9eae39c09e5f2b6a32cf93eac9395f1"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.180102 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vsjtz" event={"ID":"cf6c9856-8e0e-462e-a2bb-b21847078b54","Type":"ContainerStarted","Data":"5a286490efae1b2fcfd3289842091a1573875773e0e26817daf7cfeecd21545c"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.181181 4842 generic.go:334] "Generic (PLEG): container finished" podID="4450e400-557b-4092-8f73-124910137dc4" containerID="1f6dfdf20fb08a168081a064432d989dfc5b7013b8511778f8a6195c000accc0" exitCode=0 Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.181249 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6ctcq" event={"ID":"4450e400-557b-4092-8f73-124910137dc4","Type":"ContainerDied","Data":"1f6dfdf20fb08a168081a064432d989dfc5b7013b8511778f8a6195c000accc0"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.181268 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6ctcq" event={"ID":"4450e400-557b-4092-8f73-124910137dc4","Type":"ContainerStarted","Data":"d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.182672 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ce-account-create-update-rxmcp" event={"ID":"3f4b2578-8a31-4097-afd3-04bae6621094","Type":"ContainerStarted","Data":"d406c8dd7aa9d060cb8c2e933af0916fc03ef6a4df86a58d035643deda1d435e"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.182708 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ce-account-create-update-rxmcp" event={"ID":"3f4b2578-8a31-4097-afd3-04bae6621094","Type":"ContainerStarted","Data":"15cb3839393a80afe35c025ac6d4f112e276e4e995c843796ae616facfee62f2"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.184655 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p28sd" event={"ID":"31bf41ed-98c7-44ed-abba-93b74a546e71","Type":"ContainerStarted","Data":"d8fe329dd4b6d5e2f6afa45efa10d42b7ad946aa8ec1ea8a45b86570356f4bd0"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.184686 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p28sd" event={"ID":"31bf41ed-98c7-44ed-abba-93b74a546e71","Type":"ContainerStarted","Data":"b54b449d9636044ec4aa3fc42dc49895933f5c104686edd5988476072faf577b"} Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.229853 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-p28sd" podStartSLOduration=2.229829287 podStartE2EDuration="2.229829287s" podCreationTimestamp="2026-02-02 07:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:39.208989234 +0000 UTC m=+1104.586257156" watchObservedRunningTime="2026-02-02 07:04:39.229829287 +0000 UTC m=+1104.607097199" Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.239006 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-85ce-account-create-update-rxmcp" podStartSLOduration=2.238988703 podStartE2EDuration="2.238988703s" podCreationTimestamp="2026-02-02 07:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:39.231819116 +0000 UTC m=+1104.609087028" watchObservedRunningTime="2026-02-02 07:04:39.238988703 +0000 UTC m=+1104.616256605" Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.266566 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vsjtz" podStartSLOduration=1.266543201 podStartE2EDuration="1.266543201s" podCreationTimestamp="2026-02-02 07:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:39.261266971 +0000 UTC m=+1104.638534883" watchObservedRunningTime="2026-02-02 07:04:39.266543201 +0000 UTC m=+1104.643811113" Feb 02 07:04:39 crc kubenswrapper[4842]: W0202 07:04:39.274308 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef83800c_79dc_4cfa_9f7c_194a44995d12.slice/crio-45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd WatchSource:0}: Error finding container 45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd: Status 404 returned error can't find the container with id 45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd Feb 02 07:04:39 crc kubenswrapper[4842]: I0202 07:04:39.279525 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2348-account-create-update-l9hwl"] Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.212416 4842 generic.go:334] "Generic (PLEG): container finished" podID="cf6c9856-8e0e-462e-a2bb-b21847078b54" containerID="8450cdf340185e60d5f4db9ea47d0c0bf9eae39c09e5f2b6a32cf93eac9395f1" exitCode=0 Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.212517 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vsjtz" event={"ID":"cf6c9856-8e0e-462e-a2bb-b21847078b54","Type":"ContainerDied","Data":"8450cdf340185e60d5f4db9ea47d0c0bf9eae39c09e5f2b6a32cf93eac9395f1"} Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.216578 4842 generic.go:334] "Generic (PLEG): container finished" podID="ef83800c-79dc-4cfa-9f7c-194a44995d12" containerID="5a4746c338d6ea60edc25a0f516095639bc028a5f96d859500d9f30d568afd7f" exitCode=0 Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.216682 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2348-account-create-update-l9hwl" event={"ID":"ef83800c-79dc-4cfa-9f7c-194a44995d12","Type":"ContainerDied","Data":"5a4746c338d6ea60edc25a0f516095639bc028a5f96d859500d9f30d568afd7f"} Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.216717 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2348-account-create-update-l9hwl" event={"ID":"ef83800c-79dc-4cfa-9f7c-194a44995d12","Type":"ContainerStarted","Data":"45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd"} Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.231891 4842 generic.go:334] "Generic (PLEG): container finished" podID="3f4b2578-8a31-4097-afd3-04bae6621094" containerID="d406c8dd7aa9d060cb8c2e933af0916fc03ef6a4df86a58d035643deda1d435e" exitCode=0 Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.232044 4842 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/placement-85ce-account-create-update-rxmcp" event={"ID":"3f4b2578-8a31-4097-afd3-04bae6621094","Type":"ContainerDied","Data":"d406c8dd7aa9d060cb8c2e933af0916fc03ef6a4df86a58d035643deda1d435e"} Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.247757 4842 generic.go:334] "Generic (PLEG): container finished" podID="31bf41ed-98c7-44ed-abba-93b74a546e71" containerID="d8fe329dd4b6d5e2f6afa45efa10d42b7ad946aa8ec1ea8a45b86570356f4bd0" exitCode=0 Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.248049 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p28sd" event={"ID":"31bf41ed-98c7-44ed-abba-93b74a546e71","Type":"ContainerDied","Data":"d8fe329dd4b6d5e2f6afa45efa10d42b7ad946aa8ec1ea8a45b86570356f4bd0"} Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.275415 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.408929 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"] Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.409142 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerName="dnsmasq-dns" containerID="cri-o://b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c" gracePeriod=10 Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.723699 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.782829 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts\") pod \"6601a68f-34a5-4629-ac74-97cb14e809f3\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.782874 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcbc4\" (UniqueName: \"kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4\") pod \"6601a68f-34a5-4629-ac74-97cb14e809f3\" (UID: \"6601a68f-34a5-4629-ac74-97cb14e809f3\") " Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.784619 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6601a68f-34a5-4629-ac74-97cb14e809f3" (UID: "6601a68f-34a5-4629-ac74-97cb14e809f3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.794367 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4" (OuterVolumeSpecName: "kube-api-access-kcbc4") pod "6601a68f-34a5-4629-ac74-97cb14e809f3" (UID: "6601a68f-34a5-4629-ac74-97cb14e809f3"). InnerVolumeSpecName "kube-api-access-kcbc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.806518 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.884691 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwnf7\" (UniqueName: \"kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7\") pod \"4450e400-557b-4092-8f73-124910137dc4\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.884857 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts\") pod \"4450e400-557b-4092-8f73-124910137dc4\" (UID: \"4450e400-557b-4092-8f73-124910137dc4\") " Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.885203 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6601a68f-34a5-4629-ac74-97cb14e809f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.885233 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcbc4\" (UniqueName: \"kubernetes.io/projected/6601a68f-34a5-4629-ac74-97cb14e809f3-kube-api-access-kcbc4\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.888007 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7" (OuterVolumeSpecName: "kube-api-access-dwnf7") pod "4450e400-557b-4092-8f73-124910137dc4" (UID: "4450e400-557b-4092-8f73-124910137dc4"). InnerVolumeSpecName "kube-api-access-dwnf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.889932 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4450e400-557b-4092-8f73-124910137dc4" (UID: "4450e400-557b-4092-8f73-124910137dc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.986559 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwnf7\" (UniqueName: \"kubernetes.io/projected/4450e400-557b-4092-8f73-124910137dc4-kube-api-access-dwnf7\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:40 crc kubenswrapper[4842]: I0202 07:04:40.986862 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4450e400-557b-4092-8f73-124910137dc4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.052785 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.087750 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config\") pod \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.087789 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb\") pod \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.087880 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6hzc\" (UniqueName: \"kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc\") pod \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.087977 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") pod \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.091395 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc" (OuterVolumeSpecName: "kube-api-access-w6hzc") pod "50ef0678-fa8e-46f0-87b3-d4cd540ca293" (UID: "50ef0678-fa8e-46f0-87b3-d4cd540ca293"). InnerVolumeSpecName "kube-api-access-w6hzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.141471 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config" (OuterVolumeSpecName: "config") pod "50ef0678-fa8e-46f0-87b3-d4cd540ca293" (UID: "50ef0678-fa8e-46f0-87b3-d4cd540ca293"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: E0202 07:04:41.150415 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc podName:50ef0678-fa8e-46f0-87b3-d4cd540ca293 nodeName:}" failed. No retries permitted until 2026-02-02 07:04:41.650393206 +0000 UTC m=+1107.027661118 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc") pod "50ef0678-fa8e-46f0-87b3-d4cd540ca293" (UID: "50ef0678-fa8e-46f0-87b3-d4cd540ca293") : error deleting /var/lib/kubelet/pods/50ef0678-fa8e-46f0-87b3-d4cd540ca293/volume-subpaths: remove /var/lib/kubelet/pods/50ef0678-fa8e-46f0-87b3-d4cd540ca293/volume-subpaths: no such file or directory Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.150682 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50ef0678-fa8e-46f0-87b3-d4cd540ca293" (UID: "50ef0678-fa8e-46f0-87b3-d4cd540ca293"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.189539 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.189568 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.189578 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6hzc\" (UniqueName: \"kubernetes.io/projected/50ef0678-fa8e-46f0-87b3-d4cd540ca293-kube-api-access-w6hzc\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.255162 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0ec7-account-create-update-x5rkz" event={"ID":"6601a68f-34a5-4629-ac74-97cb14e809f3","Type":"ContainerDied","Data":"1bfda80e82935159993cfdb80d57362500543b4c3a630820faa6ff4dbddd1689"} Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.255195 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfda80e82935159993cfdb80d57362500543b4c3a630820faa6ff4dbddd1689" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.255273 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-x5rkz" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.266682 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.266742 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" event={"ID":"50ef0678-fa8e-46f0-87b3-d4cd540ca293","Type":"ContainerDied","Data":"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c"} Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.266784 4842 scope.go:117] "RemoveContainer" containerID="b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.266584 4842 generic.go:334] "Generic (PLEG): container finished" podID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerID="b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c" exitCode=0 Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.267104 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794868bd45-ljcbj" event={"ID":"50ef0678-fa8e-46f0-87b3-d4cd540ca293","Type":"ContainerDied","Data":"5ea515418db439b7b85e9f81e72d96b594a2f4593445c0e76fd6508fbe9dc808"} Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.269695 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6ctcq" event={"ID":"4450e400-557b-4092-8f73-124910137dc4","Type":"ContainerDied","Data":"d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450"} Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.269723 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8feadef768195e707bb4429851d853709421e83367cc73a361512bc437b5450" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.269760 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6ctcq" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.289255 4842 scope.go:117] "RemoveContainer" containerID="9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.311185 4842 scope.go:117] "RemoveContainer" containerID="b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c" Feb 02 07:04:41 crc kubenswrapper[4842]: E0202 07:04:41.313792 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c\": container with ID starting with b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c not found: ID does not exist" containerID="b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.313839 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c"} err="failed to get container status \"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c\": rpc error: code = NotFound desc = could not find container \"b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c\": container with ID starting with b9a0d2e6281bc51140d03bbdf39c9959c34f9011131e35574e7085eb36300b4c not found: ID does not exist" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.313873 4842 scope.go:117] "RemoveContainer" containerID="9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b" Feb 02 07:04:41 crc kubenswrapper[4842]: E0202 07:04:41.314267 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b\": container with ID starting with 9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b not found: ID does not exist" containerID="9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.314319 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b"} err="failed to get container status \"9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b\": rpc error: code = NotFound desc = could not find container \"9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b\": container with ID starting with 9256a22e336903a02a75fd334630a8b5dba0a0037c179f024a9a59492a8a565b not found: ID does not exist" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.376750 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qm2z9"] Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.382173 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qm2z9"] Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.446970 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19378e36-9154-451c-88fe-dab4522aa0dc" path="/var/lib/kubelet/pods/19378e36-9154-451c-88fe-dab4522aa0dc/volumes" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.551174 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.595592 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xns4j\" (UniqueName: \"kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j\") pod \"cf6c9856-8e0e-462e-a2bb-b21847078b54\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.595767 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts\") pod \"cf6c9856-8e0e-462e-a2bb-b21847078b54\" (UID: \"cf6c9856-8e0e-462e-a2bb-b21847078b54\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.596620 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf6c9856-8e0e-462e-a2bb-b21847078b54" (UID: "cf6c9856-8e0e-462e-a2bb-b21847078b54"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.604482 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j" (OuterVolumeSpecName: "kube-api-access-xns4j") pod "cf6c9856-8e0e-462e-a2bb-b21847078b54" (UID: "cf6c9856-8e0e-462e-a2bb-b21847078b54"). InnerVolumeSpecName "kube-api-access-xns4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.666125 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.674366 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.688606 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-p28sd" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.696788 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tbjg\" (UniqueName: \"kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg\") pod \"3f4b2578-8a31-4097-afd3-04bae6621094\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.696931 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts\") pod \"3f4b2578-8a31-4097-afd3-04bae6621094\" (UID: \"3f4b2578-8a31-4097-afd3-04bae6621094\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.697007 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") pod \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\" (UID: \"50ef0678-fa8e-46f0-87b3-d4cd540ca293\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.697397 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xns4j\" (UniqueName: \"kubernetes.io/projected/cf6c9856-8e0e-462e-a2bb-b21847078b54-kube-api-access-xns4j\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.697429 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf6c9856-8e0e-462e-a2bb-b21847078b54-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.697606 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f4b2578-8a31-4097-afd3-04bae6621094" (UID: "3f4b2578-8a31-4097-afd3-04bae6621094"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.698010 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50ef0678-fa8e-46f0-87b3-d4cd540ca293" (UID: "50ef0678-fa8e-46f0-87b3-d4cd540ca293"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.700334 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg" (OuterVolumeSpecName: "kube-api-access-4tbjg") pod "3f4b2578-8a31-4097-afd3-04bae6621094" (UID: "3f4b2578-8a31-4097-afd3-04bae6621094"). InnerVolumeSpecName "kube-api-access-4tbjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.798862 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts\") pod \"31bf41ed-98c7-44ed-abba-93b74a546e71\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.798976 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7svj\" (UniqueName: \"kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj\") pod \"31bf41ed-98c7-44ed-abba-93b74a546e71\" (UID: \"31bf41ed-98c7-44ed-abba-93b74a546e71\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799039 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts\") pod \"ef83800c-79dc-4cfa-9f7c-194a44995d12\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799064 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxhpl\" (UniqueName: \"kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl\") pod \"ef83800c-79dc-4cfa-9f7c-194a44995d12\" (UID: \"ef83800c-79dc-4cfa-9f7c-194a44995d12\") " Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799335 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31bf41ed-98c7-44ed-abba-93b74a546e71" (UID: "31bf41ed-98c7-44ed-abba-93b74a546e71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799392 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tbjg\" (UniqueName: \"kubernetes.io/projected/3f4b2578-8a31-4097-afd3-04bae6621094-kube-api-access-4tbjg\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799404 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f4b2578-8a31-4097-afd3-04bae6621094-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799414 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50ef0678-fa8e-46f0-87b3-d4cd540ca293-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.799578 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef83800c-79dc-4cfa-9f7c-194a44995d12" (UID: "ef83800c-79dc-4cfa-9f7c-194a44995d12"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.802600 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl" (OuterVolumeSpecName: "kube-api-access-hxhpl") pod "ef83800c-79dc-4cfa-9f7c-194a44995d12" (UID: "ef83800c-79dc-4cfa-9f7c-194a44995d12"). InnerVolumeSpecName "kube-api-access-hxhpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.802630 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj" (OuterVolumeSpecName: "kube-api-access-t7svj") pod "31bf41ed-98c7-44ed-abba-93b74a546e71" (UID: "31bf41ed-98c7-44ed-abba-93b74a546e71"). InnerVolumeSpecName "kube-api-access-t7svj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.900588 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef83800c-79dc-4cfa-9f7c-194a44995d12-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.900635 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxhpl\" (UniqueName: \"kubernetes.io/projected/ef83800c-79dc-4cfa-9f7c-194a44995d12-kube-api-access-hxhpl\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.900654 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31bf41ed-98c7-44ed-abba-93b74a546e71-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.900669 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7svj\" (UniqueName: \"kubernetes.io/projected/31bf41ed-98c7-44ed-abba-93b74a546e71-kube-api-access-t7svj\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.960762 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"] Feb 02 07:04:41 crc kubenswrapper[4842]: I0202 07:04:41.973628 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-794868bd45-ljcbj"] Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.146463 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.146540 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.146601 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.147527 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.147629 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62" gracePeriod=600 Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.292578 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62" exitCode=0 Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.292651 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62"} Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.292728 4842 scope.go:117] "RemoveContainer" containerID="409dfa164f76008135fd93bb209c464e3603214d524a9798b15a0c8226203f93" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.297876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2348-account-create-update-l9hwl" event={"ID":"ef83800c-79dc-4cfa-9f7c-194a44995d12","Type":"ContainerDied","Data":"45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd"} Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.297958 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45bfcdc7da5be52f168e943bba23476495a7050157d4308d66afb8530a3e96bd" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.297899 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-l9hwl" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.299796 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ce-account-create-update-rxmcp" event={"ID":"3f4b2578-8a31-4097-afd3-04bae6621094","Type":"ContainerDied","Data":"15cb3839393a80afe35c025ac6d4f112e276e4e995c843796ae616facfee62f2"} Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.300001 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15cb3839393a80afe35c025ac6d4f112e276e4e995c843796ae616facfee62f2" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.300064 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-85ce-account-create-update-rxmcp" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.302930 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p28sd" event={"ID":"31bf41ed-98c7-44ed-abba-93b74a546e71","Type":"ContainerDied","Data":"b54b449d9636044ec4aa3fc42dc49895933f5c104686edd5988476072faf577b"} Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.303015 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b54b449d9636044ec4aa3fc42dc49895933f5c104686edd5988476072faf577b" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.303112 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p28sd" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.307872 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vsjtz" event={"ID":"cf6c9856-8e0e-462e-a2bb-b21847078b54","Type":"ContainerDied","Data":"5a286490efae1b2fcfd3289842091a1573875773e0e26817daf7cfeecd21545c"} Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.307903 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a286490efae1b2fcfd3289842091a1573875773e0e26817daf7cfeecd21545c" Feb 02 07:04:42 crc kubenswrapper[4842]: I0202 07:04:42.307962 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vsjtz" Feb 02 07:04:42 crc kubenswrapper[4842]: E0202 07:04:42.533999 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f4b2578_8a31_4097_afd3_04bae6621094.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f4b2578_8a31_4097_afd3_04bae6621094.slice/crio-15cb3839393a80afe35c025ac6d4f112e276e4e995c843796ae616facfee62f2\": RecentStats: unable to find data in memory cache]" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.318682 4842 generic.go:334] "Generic (PLEG): container finished" podID="15fb5e79-8dd5-46ae-b8dd-6944cc810350" containerID="be09858b0b26720a1b1eb72e60d3de0b3dbd4ce4a7e6fc548a4d5f3d171165c8" exitCode=0 Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.318774 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kbdxw" event={"ID":"15fb5e79-8dd5-46ae-b8dd-6944cc810350","Type":"ContainerDied","Data":"be09858b0b26720a1b1eb72e60d3de0b3dbd4ce4a7e6fc548a4d5f3d171165c8"} Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.323852 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49"} Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.461119 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" path="/var/lib/kubelet/pods/50ef0678-fa8e-46f0-87b3-d4cd540ca293/volumes" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634008 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7qxb9"] Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634384 4842 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ef83800c-79dc-4cfa-9f7c-194a44995d12" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634400 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef83800c-79dc-4cfa-9f7c-194a44995d12" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634418 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19378e36-9154-451c-88fe-dab4522aa0dc" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634426 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="19378e36-9154-451c-88fe-dab4522aa0dc" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634444 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4b2578-8a31-4097-afd3-04bae6621094" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634453 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4b2578-8a31-4097-afd3-04bae6621094" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634463 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31bf41ed-98c7-44ed-abba-93b74a546e71" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634471 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="31bf41ed-98c7-44ed-abba-93b74a546e71" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634485 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerName="init" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634493 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerName="init" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634506 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4450e400-557b-4092-8f73-124910137dc4" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634514 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4450e400-557b-4092-8f73-124910137dc4" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634524 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6601a68f-34a5-4629-ac74-97cb14e809f3" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634532 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6601a68f-34a5-4629-ac74-97cb14e809f3" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634548 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf6c9856-8e0e-462e-a2bb-b21847078b54" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634555 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf6c9856-8e0e-462e-a2bb-b21847078b54" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: E0202 07:04:43.634575 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerName="dnsmasq-dns" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634583 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" 
containerName="dnsmasq-dns" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634749 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ef0678-fa8e-46f0-87b3-d4cd540ca293" containerName="dnsmasq-dns" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634762 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="31bf41ed-98c7-44ed-abba-93b74a546e71" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634774 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="4450e400-557b-4092-8f73-124910137dc4" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634785 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6601a68f-34a5-4629-ac74-97cb14e809f3" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634794 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4b2578-8a31-4097-afd3-04bae6621094" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634806 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef83800c-79dc-4cfa-9f7c-194a44995d12" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634824 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf6c9856-8e0e-462e-a2bb-b21847078b54" containerName="mariadb-database-create" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.634839 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="19378e36-9154-451c-88fe-dab4522aa0dc" containerName="mariadb-account-create-update" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.635422 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.638984 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.646627 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fpq5h" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.650024 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7qxb9"] Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.729850 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.729919 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4lh\" (UniqueName: \"kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.729981 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.730148 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.831774 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.831823 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk4lh\" (UniqueName: \"kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.831853 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.831938 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle\") pod 
\"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.838569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.839991 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.851348 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.852034 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk4lh\" (UniqueName: \"kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh\") pod \"glance-db-sync-7qxb9\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:43 crc kubenswrapper[4842]: I0202 07:04:43.968098 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7qxb9" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.338156 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7qxb9"] Feb 02 07:04:44 crc kubenswrapper[4842]: W0202 07:04:44.343338 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8cd42ce_4a62_486b_9571_58d789ca2d38.slice/crio-6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52 WatchSource:0}: Error finding container 6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52: Status 404 returned error can't find the container with id 6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52 Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.582115 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.646836 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.646923 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.646972 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.647021 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4zkn\" (UniqueName: \"kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.647060 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.647097 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.647141 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle\") pod \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\" (UID: \"15fb5e79-8dd5-46ae-b8dd-6944cc810350\") " Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.647917 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.648255 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.653284 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn" (OuterVolumeSpecName: "kube-api-access-p4zkn") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "kube-api-access-p4zkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.655514 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.669496 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts" (OuterVolumeSpecName: "scripts") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.670341 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.671446 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "15fb5e79-8dd5-46ae-b8dd-6944cc810350" (UID: "15fb5e79-8dd5-46ae-b8dd-6944cc810350"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.748923 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.748964 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.748977 4842 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15fb5e79-8dd5-46ae-b8dd-6944cc810350-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.748992 4842 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15fb5e79-8dd5-46ae-b8dd-6944cc810350-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.749004 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4zkn\" (UniqueName: \"kubernetes.io/projected/15fb5e79-8dd5-46ae-b8dd-6944cc810350-kube-api-access-p4zkn\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.749017 4842 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:44 crc kubenswrapper[4842]: I0202 07:04:44.749028 4842 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15fb5e79-8dd5-46ae-b8dd-6944cc810350-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:45 crc kubenswrapper[4842]: I0202 07:04:45.343432 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kbdxw" event={"ID":"15fb5e79-8dd5-46ae-b8dd-6944cc810350","Type":"ContainerDied","Data":"1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade"} Feb 02 07:04:45 crc kubenswrapper[4842]: I0202 07:04:45.343465 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kbdxw" Feb 02 07:04:45 crc kubenswrapper[4842]: I0202 07:04:45.343512 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aa25f7ce59beabc543eaca2151f7fe5af27722fc7175abe6c90cab123aefade" Feb 02 07:04:45 crc kubenswrapper[4842]: I0202 07:04:45.344759 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7qxb9" event={"ID":"b8cd42ce-4a62-486b-9571-58d789ca2d38","Type":"ContainerStarted","Data":"6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52"} Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.381176 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-h2lm5"] Feb 02 07:04:46 crc kubenswrapper[4842]: E0202 07:04:46.381921 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fb5e79-8dd5-46ae-b8dd-6944cc810350" containerName="swift-ring-rebalance" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.381938 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fb5e79-8dd5-46ae-b8dd-6944cc810350" containerName="swift-ring-rebalance" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.382138 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fb5e79-8dd5-46ae-b8dd-6944cc810350" containerName="swift-ring-rebalance" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.382770 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.386595 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.398547 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h2lm5"] Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.474529 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59nqv\" (UniqueName: \"kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.474622 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.576877 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.577125 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59nqv\" (UniqueName: \"kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " 
pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.578000 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.601003 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59nqv\" (UniqueName: \"kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv\") pod \"root-account-create-update-h2lm5\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:46 crc kubenswrapper[4842]: I0202 07:04:46.707836 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.084639 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.093109 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"swift-storage-0\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") " pod="openstack/swift-storage-0" Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.146980 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-h2lm5"] Feb 02 07:04:47 crc kubenswrapper[4842]: W0202 07:04:47.151501 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0cbe107_ad1a_47aa_9b91_4a08c8b712fb.slice/crio-81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6 WatchSource:0}: Error finding container 81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6: Status 404 returned error can't find the container with id 81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6 Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.247536 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.366189 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lm5" event={"ID":"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb","Type":"ContainerStarted","Data":"baa67ddc95fed558f7c865e018c407b7a90c8fd196753967451af639f1b0851e"} Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.366615 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lm5" event={"ID":"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb","Type":"ContainerStarted","Data":"81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6"} Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.386200 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-h2lm5" podStartSLOduration=1.386185854 podStartE2EDuration="1.386185854s" podCreationTimestamp="2026-02-02 07:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:04:47.381270443 +0000 UTC m=+1112.758538365" watchObservedRunningTime="2026-02-02 07:04:47.386185854 +0000 UTC m=+1112.763453766" Feb 02 07:04:47 crc kubenswrapper[4842]: I0202 07:04:47.802623 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:04:48 crc kubenswrapper[4842]: I0202 07:04:48.377047 4842 generic.go:334] "Generic (PLEG): container finished" podID="e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" containerID="baa67ddc95fed558f7c865e018c407b7a90c8fd196753967451af639f1b0851e" exitCode=0 Feb 02 07:04:48 crc kubenswrapper[4842]: I0202 07:04:48.377105 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lm5" event={"ID":"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb","Type":"ContainerDied","Data":"baa67ddc95fed558f7c865e018c407b7a90c8fd196753967451af639f1b0851e"} Feb 02 07:04:48 crc kubenswrapper[4842]: I0202 07:04:48.379259 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"ab889a1e60a176a5157cbf2492af02320a93e4b8f19cc77b84445a221a0d1b90"} Feb 02 07:04:49 crc kubenswrapper[4842]: I0202 07:04:49.387113 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"496f7c8f3a8e1190f069f9d123dad4f03c5ddc2c339a3a530d938ce75113f766"} Feb 02 07:04:49 crc kubenswrapper[4842]: I0202 07:04:49.782257 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 02 07:04:53 crc kubenswrapper[4842]: I0202 07:04:53.820006 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sgwrm" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" probeResult="failure" output=< Feb 02 07:04:53 crc kubenswrapper[4842]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 02 07:04:53 crc kubenswrapper[4842]: > Feb 02 07:04:53 crc kubenswrapper[4842]: I0202 07:04:53.837796 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:53 crc kubenswrapper[4842]: I0202 07:04:53.854352 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.082799 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sgwrm-config-hhzx8"] Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.084032 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.086585 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.112058 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sgwrm-config-hhzx8"] Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.262868 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.262970 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.263029 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.263048 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.263065 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98sxk\" (UniqueName: \"kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.263195 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365096 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: 
\"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365156 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365194 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365228 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98sxk\" (UniqueName: \"kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365244 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365266 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365622 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365682 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.365754 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.366703 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: 
\"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.367591 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.404110 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98sxk\" (UniqueName: \"kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk\") pod \"ovn-controller-sgwrm-config-hhzx8\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:54 crc kubenswrapper[4842]: I0202 07:04:54.413705 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.450442 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-h2lm5" event={"ID":"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb","Type":"ContainerDied","Data":"81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6"} Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.450767 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81b2a5546beb19ff9cd7c9100f20f94d4b1c03559214b6eacc4130c8dc3472a6" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.491557 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.594615 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts\") pod \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.594810 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59nqv\" (UniqueName: \"kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv\") pod \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\" (UID: \"e0cbe107-ad1a-47aa-9b91-4a08c8b712fb\") " Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.595614 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" (UID: "e0cbe107-ad1a-47aa-9b91-4a08c8b712fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.601719 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv" (OuterVolumeSpecName: "kube-api-access-59nqv") pod "e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" (UID: "e0cbe107-ad1a-47aa-9b91-4a08c8b712fb"). InnerVolumeSpecName "kube-api-access-59nqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.696438 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59nqv\" (UniqueName: \"kubernetes.io/projected/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-kube-api-access-59nqv\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.696462 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:55 crc kubenswrapper[4842]: I0202 07:04:55.885582 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sgwrm-config-hhzx8"] Feb 02 07:04:55 crc kubenswrapper[4842]: W0202 07:04:55.895425 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36369d86_4106_4626_9771_c63ca46e2b3e.slice/crio-524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5 WatchSource:0}: Error finding container 524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5: Status 404 returned error can't find the container with id 524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5 Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.494510 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7qxb9" event={"ID":"b8cd42ce-4a62-486b-9571-58d789ca2d38","Type":"ContainerStarted","Data":"f28dfbf8c174cb46df97e4d7d6b844e785a2d8671506e1ebb71b67017e08a6b8"} Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.496450 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm-config-hhzx8" event={"ID":"36369d86-4106-4626-9771-c63ca46e2b3e","Type":"ContainerStarted","Data":"524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5"} Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.499631 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"1864c37f5464bef32be4591740d73c6be777716e778338b57e2c23f30b098973"} Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.499653 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-h2lm5" Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.499663 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"81e3b07657ef3f1d8e0c81f783b14b3167b42779f998c664f2c184857a6ffc8b"} Feb 02 07:04:56 crc kubenswrapper[4842]: I0202 07:04:56.499675 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"0579b6675bbca573212a34273ea354bc485d0dead5d30e277230eaf0ce0b9594"} Feb 02 07:04:57 crc kubenswrapper[4842]: I0202 07:04:57.509306 4842 generic.go:334] "Generic (PLEG): container finished" podID="36369d86-4106-4626-9771-c63ca46e2b3e" containerID="59526756b474c2762ebc0f7a6578c91c40cc272db00fa72f3384382706ed53e2" exitCode=0 Feb 02 07:04:57 crc kubenswrapper[4842]: I0202 07:04:57.511381 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm-config-hhzx8" event={"ID":"36369d86-4106-4626-9771-c63ca46e2b3e","Type":"ContainerDied","Data":"59526756b474c2762ebc0f7a6578c91c40cc272db00fa72f3384382706ed53e2"} Feb 02 07:04:57 crc kubenswrapper[4842]: I0202 07:04:57.532572 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-7qxb9" podStartSLOduration=3.371759485 podStartE2EDuration="14.532555972s" podCreationTimestamp="2026-02-02 07:04:43 +0000 UTC" firstStartedPulling="2026-02-02 07:04:44.349966372 +0000 UTC m=+1109.727234284" lastFinishedPulling="2026-02-02 07:04:55.510762829 +0000 UTC m=+1120.888030771" observedRunningTime="2026-02-02 07:04:57.528190325 +0000 UTC m=+1122.905458237" watchObservedRunningTime="2026-02-02 07:04:57.532555972 +0000 UTC m=+1122.909823884" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.531009 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"94a480917554fbdc9c94fdc240db04a25556fac19911eb5945a6838a7169e5f3"} Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.531585 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"98d05e29848a090df093dcb34910845ebd22086e918c4b510210550b0fcd98f9"} Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.531609 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"84a64916ad5a870dd2730290e371bd4ee7a327af7bfa716ae7b3457657e3b792"} Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.531628 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"78ea2470e0bb66602235ee6f953b1cb50c60bbf2dda3d60aa9ded3436730161c"} Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.834940 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sgwrm" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.921549 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.959716 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.959901 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98sxk\" (UniqueName: \"kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.959973 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.960012 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.960058 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.960094 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run\") pod \"36369d86-4106-4626-9771-c63ca46e2b3e\" (UID: \"36369d86-4106-4626-9771-c63ca46e2b3e\") " Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.960475 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run" (OuterVolumeSpecName: "var-run") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.961421 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts" (OuterVolumeSpecName: "scripts") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.961795 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.961878 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.962528 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:04:58 crc kubenswrapper[4842]: I0202 07:04:58.979121 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk" (OuterVolumeSpecName: "kube-api-access-98sxk") pod "36369d86-4106-4626-9771-c63ca46e2b3e" (UID: "36369d86-4106-4626-9771-c63ca46e2b3e"). InnerVolumeSpecName "kube-api-access-98sxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061855 4842 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061883 4842 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061894 4842 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061903 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36369d86-4106-4626-9771-c63ca46e2b3e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061912 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98sxk\" (UniqueName: \"kubernetes.io/projected/36369d86-4106-4626-9771-c63ca46e2b3e-kube-api-access-98sxk\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.061920 4842 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36369d86-4106-4626-9771-c63ca46e2b3e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.539889 4842 generic.go:334] "Generic (PLEG): container finished" podID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerID="15488c5f14bed733c354b136f5f9b0303d01f42120de21fa2a655d19a2d681ef" exitCode=0 Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.540132 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerDied","Data":"15488c5f14bed733c354b136f5f9b0303d01f42120de21fa2a655d19a2d681ef"} Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.561551 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"5fe6ac9847ee5629c3a3a2ccb929b05946534e86d95fae65cd97cbab654c7391"} Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.564208 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm-config-hhzx8" event={"ID":"36369d86-4106-4626-9771-c63ca46e2b3e","Type":"ContainerDied","Data":"524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5"} Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.564263 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="524dac2f02dc48d1fd595c5281320196026031f7d307b89e14bd1fb64ef0c5c5" Feb 02 07:04:59 crc kubenswrapper[4842]: I0202 07:04:59.564362 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm-config-hhzx8" Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.048403 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sgwrm-config-hhzx8"] Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.056731 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sgwrm-config-hhzx8"] Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.574578 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerStarted","Data":"3913ec835fcef00ab7ba5cfa0bb102b1d808857fbee96be0da99ede67f9672b5"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.575148 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.580431 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"419e27de3686d1a75400d18f391cbe54519868631357cce324a86c057a1dbbfe"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.580457 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"c3ceba27f85cf9e18b4c96e9c35e3e830a3840e245ff37876679745418c599df"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.580757 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"11c87109b1d73f0312d44a7a194b500b7f7e551073a65468bc291891955fd1d1"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.580771 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"3accf74226bf0263e16fdcc906f97a58d41768cb604252689a8c7a9fac50f04f"} Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.580779 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"a6f0be0e71192334da01f394f7e0075f3ff472a60d737f40449f0c7c56b45801"} Feb 02 07:05:00 crc kubenswrapper[4842]: 
Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.582114 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerDied","Data":"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b"}
Feb 02 07:05:00 crc kubenswrapper[4842]: I0202 07:05:00.607507 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.160761207 podStartE2EDuration="1m7.607487198s" podCreationTimestamp="2026-02-02 07:03:53 +0000 UTC" firstStartedPulling="2026-02-02 07:03:55.432933717 +0000 UTC m=+1060.810201629" lastFinishedPulling="2026-02-02 07:04:25.879659698 +0000 UTC m=+1091.256927620" observedRunningTime="2026-02-02 07:05:00.603266134 +0000 UTC m=+1125.980534066" watchObservedRunningTime="2026-02-02 07:05:00.607487198 +0000 UTC m=+1125.984755110"
Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.443093 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36369d86-4106-4626-9771-c63ca46e2b3e" path="/var/lib/kubelet/pods/36369d86-4106-4626-9771-c63ca46e2b3e/volumes"
Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.597060 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerStarted","Data":"a0ba4c6bbf6b05d401f52ab663d9f47cbde0cebb5dfcb8997ff120cffdd05060"}
Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.600065 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerStarted","Data":"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d"}
Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.600428 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.665617 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=21.186365561 podStartE2EDuration="32.665600502s" podCreationTimestamp="2026-02-02 07:04:29 +0000 UTC" firstStartedPulling="2026-02-02 07:04:47.805887222 +0000 UTC m=+1113.183155134" lastFinishedPulling="2026-02-02 07:04:59.285122163 +0000 UTC m=+1124.662390075" observedRunningTime="2026-02-02 07:05:01.658035565 +0000 UTC m=+1127.035303477" watchObservedRunningTime="2026-02-02 07:05:01.665600502 +0000 UTC m=+1127.042868414"
Feb 02 07:05:01 crc kubenswrapper[4842]: I0202 07:05:01.685267 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371967.169525 podStartE2EDuration="1m9.685249946s" podCreationTimestamp="2026-02-02 07:03:52 +0000 UTC" firstStartedPulling="2026-02-02 07:03:54.578804137 +0000 UTC m=+1059.956072049" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:01.679479484 +0000 UTC m=+1127.056747436" watchObservedRunningTime="2026-02-02 07:05:01.685249946 +0000 UTC m=+1127.062517858"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.051406 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"]
Feb 02 07:05:02 crc kubenswrapper[4842]: E0202 07:05:02.051714 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36369d86-4106-4626-9771-c63ca46e2b3e" containerName="ovn-config"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.051725 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="36369d86-4106-4626-9771-c63ca46e2b3e" containerName="ovn-config"
Feb 02 07:05:02 crc kubenswrapper[4842]: E0202 07:05:02.051753 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" containerName="mariadb-account-create-update"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.051759 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" containerName="mariadb-account-create-update"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.051925 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="36369d86-4106-4626-9771-c63ca46e2b3e" containerName="ovn-config"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.051946 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" containerName="mariadb-account-create-update"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.052835 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.056351 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.067652 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"]
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215244 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215321 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215350 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215389 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr"
Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215531 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw2lr\" (UniqueName: \"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr"
\"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.215597 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318009 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318131 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318189 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318295 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318514 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw2lr\" (UniqueName: \"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.318624 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.319322 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.319677 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.320024 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.320265 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.320534 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.347641 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw2lr\" (UniqueName: \"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") pod \"dnsmasq-dns-8467b54bcc-fn7dr\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.371569 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.610577 4842 generic.go:334] "Generic (PLEG): container finished" podID="b8cd42ce-4a62-486b-9571-58d789ca2d38" containerID="f28dfbf8c174cb46df97e4d7d6b844e785a2d8671506e1ebb71b67017e08a6b8" exitCode=0 Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.610644 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7qxb9" event={"ID":"b8cd42ce-4a62-486b-9571-58d789ca2d38","Type":"ContainerDied","Data":"f28dfbf8c174cb46df97e4d7d6b844e785a2d8671506e1ebb71b67017e08a6b8"} Feb 02 07:05:02 crc kubenswrapper[4842]: I0202 07:05:02.676341 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"] Feb 02 07:05:03 crc kubenswrapper[4842]: E0202 07:05:03.056593 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57953a5b_9fe5_49e3_bc39_7ac347467088.slice/crio-e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:05:03 crc kubenswrapper[4842]: I0202 07:05:03.619301 4842 generic.go:334] "Generic (PLEG): container finished" podID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerID="e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f" exitCode=0 Feb 02 07:05:03 crc kubenswrapper[4842]: I0202 07:05:03.620332 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" event={"ID":"57953a5b-9fe5-49e3-bc39-7ac347467088","Type":"ContainerDied","Data":"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f"} Feb 02 07:05:03 crc kubenswrapper[4842]: I0202 07:05:03.620357 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" event={"ID":"57953a5b-9fe5-49e3-bc39-7ac347467088","Type":"ContainerStarted","Data":"45616b816ffed6aadd7c2954b933ac19362083c5815ff3769fd5f6861a68956c"} Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.012114 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7qxb9" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.145262 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data\") pod \"b8cd42ce-4a62-486b-9571-58d789ca2d38\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.145339 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk4lh\" (UniqueName: \"kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh\") pod \"b8cd42ce-4a62-486b-9571-58d789ca2d38\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.145514 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle\") pod \"b8cd42ce-4a62-486b-9571-58d789ca2d38\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.145579 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data\") pod \"b8cd42ce-4a62-486b-9571-58d789ca2d38\" (UID: \"b8cd42ce-4a62-486b-9571-58d789ca2d38\") " Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.151073 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh" (OuterVolumeSpecName: "kube-api-access-xk4lh") pod "b8cd42ce-4a62-486b-9571-58d789ca2d38" (UID: "b8cd42ce-4a62-486b-9571-58d789ca2d38"). InnerVolumeSpecName "kube-api-access-xk4lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.151195 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b8cd42ce-4a62-486b-9571-58d789ca2d38" (UID: "b8cd42ce-4a62-486b-9571-58d789ca2d38"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.166477 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8cd42ce-4a62-486b-9571-58d789ca2d38" (UID: "b8cd42ce-4a62-486b-9571-58d789ca2d38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.192047 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data" (OuterVolumeSpecName: "config-data") pod "b8cd42ce-4a62-486b-9571-58d789ca2d38" (UID: "b8cd42ce-4a62-486b-9571-58d789ca2d38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.247904 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.247953 4842 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.247972 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8cd42ce-4a62-486b-9571-58d789ca2d38-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.247990 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk4lh\" (UniqueName: \"kubernetes.io/projected/b8cd42ce-4a62-486b-9571-58d789ca2d38-kube-api-access-xk4lh\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.630509 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7qxb9" event={"ID":"b8cd42ce-4a62-486b-9571-58d789ca2d38","Type":"ContainerDied","Data":"6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52"} Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.630878 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6be05ab16b17ac589bed2256313d7469b8679adc5a207e3a3668b1acb8265f52" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.630828 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7qxb9" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.633896 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" event={"ID":"57953a5b-9fe5-49e3-bc39-7ac347467088","Type":"ContainerStarted","Data":"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c"} Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.634108 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:04 crc kubenswrapper[4842]: I0202 07:05:04.667126 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" podStartSLOduration=2.667100699 podStartE2EDuration="2.667100699s" podCreationTimestamp="2026-02-02 07:05:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:04.662559687 +0000 UTC m=+1130.039827639" watchObservedRunningTime="2026-02-02 07:05:04.667100699 +0000 UTC m=+1130.044368651" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.024944 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"] Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.047717 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:05 crc kubenswrapper[4842]: E0202 07:05:05.048055 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8cd42ce-4a62-486b-9571-58d789ca2d38" containerName="glance-db-sync" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.048071 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8cd42ce-4a62-486b-9571-58d789ca2d38" containerName="glance-db-sync" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.048240 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8cd42ce-4a62-486b-9571-58d789ca2d38" containerName="glance-db-sync" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.048979 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.068115 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.164752 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.164796 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.164844 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.164924 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.164999 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.165235 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmt2\" (UniqueName: \"kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267050 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267096 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267135 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267155 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267174 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267231 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dmt2\" (UniqueName: \"kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.267928 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.268008 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.268128 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.268832 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.269026 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.287449 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dmt2\" (UniqueName: 
\"kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2\") pod \"dnsmasq-dns-56c9bc6f5c-h4x5j\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.367029 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:05 crc kubenswrapper[4842]: W0202 07:05:05.648390 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode793f6a1_ed49_496a_af57_84d696daf728.slice/crio-b3ac1bf771ea13c21ef3016b99265dd8b3157a19cb4d0bcd95a7fc3cee59344d WatchSource:0}: Error finding container b3ac1bf771ea13c21ef3016b99265dd8b3157a19cb4d0bcd95a7fc3cee59344d: Status 404 returned error can't find the container with id b3ac1bf771ea13c21ef3016b99265dd8b3157a19cb4d0bcd95a7fc3cee59344d Feb 02 07:05:05 crc kubenswrapper[4842]: I0202 07:05:05.652328 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:06 crc kubenswrapper[4842]: I0202 07:05:06.667077 4842 generic.go:334] "Generic (PLEG): container finished" podID="e793f6a1-ed49-496a-af57-84d696daf728" containerID="dca3dac891364e01eb6e12794cb5bb79081189c188f045ba72387b730d26feaa" exitCode=0 Feb 02 07:05:06 crc kubenswrapper[4842]: I0202 07:05:06.667787 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="dnsmasq-dns" containerID="cri-o://3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c" gracePeriod=10 Feb 02 07:05:06 crc kubenswrapper[4842]: I0202 07:05:06.670560 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" event={"ID":"e793f6a1-ed49-496a-af57-84d696daf728","Type":"ContainerDied","Data":"dca3dac891364e01eb6e12794cb5bb79081189c188f045ba72387b730d26feaa"} Feb 02 07:05:06 crc kubenswrapper[4842]: I0202 07:05:06.670608 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" event={"ID":"e793f6a1-ed49-496a-af57-84d696daf728","Type":"ContainerStarted","Data":"b3ac1bf771ea13c21ef3016b99265dd8b3157a19cb4d0bcd95a7fc3cee59344d"} Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.072291 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196174 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196268 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196364 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196440 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196467 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw2lr\" (UniqueName: \"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.196488 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc\") pod \"57953a5b-9fe5-49e3-bc39-7ac347467088\" (UID: \"57953a5b-9fe5-49e3-bc39-7ac347467088\") " Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.201719 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr" (OuterVolumeSpecName: "kube-api-access-vw2lr") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "kube-api-access-vw2lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.233115 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.237512 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.239525 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.252885 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config" (OuterVolumeSpecName: "config") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.258172 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "57953a5b-9fe5-49e3-bc39-7ac347467088" (UID: "57953a5b-9fe5-49e3-bc39-7ac347467088"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297738 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297775 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297787 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297796 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw2lr\" (UniqueName: \"kubernetes.io/projected/57953a5b-9fe5-49e3-bc39-7ac347467088-kube-api-access-vw2lr\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297806 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.297815 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57953a5b-9fe5-49e3-bc39-7ac347467088-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.676645 4842 generic.go:334] "Generic (PLEG): container finished" podID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerID="3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c" exitCode=0 Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.676696 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr" event={"ID":"57953a5b-9fe5-49e3-bc39-7ac347467088","Type":"ContainerDied","Data":"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c"} Feb 02 
Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.677111 4842 scope.go:117] "RemoveContainer" containerID="3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c"
Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.676748 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8467b54bcc-fn7dr"
Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.680326 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" event={"ID":"e793f6a1-ed49-496a-af57-84d696daf728","Type":"ContainerStarted","Data":"b3a7c436e2e8d2b98b1b382d46734ec10fcb3fb8ee566aaba25f0dda55dc5702"}
Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.680991 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"
Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.702456 4842 scope.go:117] "RemoveContainer" containerID="e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f"
Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.708291 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" podStartSLOduration=2.708272653 podStartE2EDuration="2.708272653s" podCreationTimestamp="2026-02-02 07:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:07.705606707 +0000 UTC m=+1133.082874659" watchObservedRunningTime="2026-02-02 07:05:07.708272653 +0000 UTC m=+1133.085540605"
Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.730273 4842 scope.go:117] "RemoveContainer" containerID="3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c"
Feb 02 07:05:07 crc kubenswrapper[4842]: E0202 07:05:07.730822 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c\": container with ID starting with 3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c not found: ID does not exist" containerID="3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c"
Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.730865 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c"} err="failed to get container status \"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c\": rpc error: code = NotFound desc = could not find container \"3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c\": container with ID starting with 3fad7ed135583a1d0cc10f740da8be24965e39c32bf4bc26461df808806e508c not found: ID does not exist"
Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.730892 4842 scope.go:117] "RemoveContainer" containerID="e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f"
Feb 02 07:05:07 crc kubenswrapper[4842]: E0202 07:05:07.731244 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f\": container with ID starting with e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f not found: ID does not exist" containerID="e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f"
\"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f\": container with ID starting with e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f not found: ID does not exist" containerID="e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.731265 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f"} err="failed to get container status \"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f\": rpc error: code = NotFound desc = could not find container \"e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f\": container with ID starting with e73747c25e1db56069f9ad6b874f439bb35dd785b3f2fd7919c45acbffd10c5f not found: ID does not exist" Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.732489 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"] Feb 02 07:05:07 crc kubenswrapper[4842]: I0202 07:05:07.739405 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8467b54bcc-fn7dr"] Feb 02 07:05:09 crc kubenswrapper[4842]: I0202 07:05:09.452347 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" path="/var/lib/kubelet/pods/57953a5b-9fe5-49e3-bc39-7ac347467088/volumes" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.171548 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.666043 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8rdwx"] Feb 02 07:05:14 crc kubenswrapper[4842]: E0202 07:05:14.666602 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="dnsmasq-dns" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.666633 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="dnsmasq-dns" Feb 02 07:05:14 crc kubenswrapper[4842]: E0202 07:05:14.666657 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="init" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.666670 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="init" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.666961 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="57953a5b-9fe5-49e3-bc39-7ac347467088" containerName="dnsmasq-dns" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.667920 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.680696 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8rdwx"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.733576 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.733664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwr6v\" (UniqueName: \"kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.771331 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-hhd7d"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.773014 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.777661 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8e42-account-create-update-mtd79"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.778689 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.779916 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.794769 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hhd7d"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.816741 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8e42-account-create-update-mtd79"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836184 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836286 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwr6v\" (UniqueName: \"kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836340 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs2cf\" (UniqueName: \"kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf\") pod \"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836406 
Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836473 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhggz\" (UniqueName: \"kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d"
Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.836512 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx"
Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.837442 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx"
Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.862678 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwr6v\" (UniqueName: \"kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v\") pod \"barbican-db-create-8rdwx\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " pod="openstack/barbican-db-create-8rdwx"
Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.863479 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.872012 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-716d-account-create-update-ft5kt"]
Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.873117 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-ft5kt"
Need to start a new one" pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.878252 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.881671 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-716d-account-create-update-ft5kt"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.937969 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938084 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs2cf\" (UniqueName: \"kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf\") pod \"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938143 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmqkl\" (UniqueName: \"kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938163 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts\") pod \"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938194 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938323 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhggz\" (UniqueName: \"kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.938767 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.939331 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts\") pod 
\"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.967390 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs2cf\" (UniqueName: \"kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf\") pod \"barbican-8e42-account-create-update-mtd79\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.968569 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhggz\" (UniqueName: \"kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz\") pod \"cinder-db-create-hhd7d\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.976335 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-8p487"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.977326 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8p487" Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.989044 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8p487"] Feb 02 07:05:14 crc kubenswrapper[4842]: I0202 07:05:14.989402 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.038164 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-z87kx"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.039468 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.040145 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.046043 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.046443 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.040240 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpj84\" (UniqueName: \"kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.047165 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.047205 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmqkl\" (UniqueName: \"kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.047641 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6drft" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.047802 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.048387 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.056817 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z87kx"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.071192 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmqkl\" (UniqueName: \"kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl\") pod \"cinder-716d-account-create-update-ft5kt\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.109728 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.112907 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bfdd-account-create-update-rws4k"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.113874 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.114202 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.118270 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.123069 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bfdd-account-create-update-rws4k"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148507 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148561 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpj84\" (UniqueName: \"kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148595 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148632 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts\") pod \"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148654 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfv75\" (UniqueName: \"kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75\") pod \"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148670 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.148716 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxv4\" (UniqueName: \"kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.149271 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.173060 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpj84\" (UniqueName: \"kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84\") pod \"neutron-db-create-8p487\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.223111 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.253550 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.253804 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts\") pod \"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.253840 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfv75\" (UniqueName: \"kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75\") pod \"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.253866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.253949 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpxv4\" (UniqueName: \"kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.258731 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts\") pod 
\"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.271622 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.284587 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.293308 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfv75\" (UniqueName: \"kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75\") pod \"neutron-bfdd-account-create-update-rws4k\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.294125 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxv4\" (UniqueName: \"kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4\") pod \"keystone-db-sync-z87kx\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.368608 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.376912 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8rdwx"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.431588 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8p487" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.451509 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6drft" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.458290 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.458568 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.460626 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.463825 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="dnsmasq-dns" containerID="cri-o://f0a94a75b63c1a8041b919515cc44d86376bbe513e93d1848bcd51190a1482d3" gracePeriod=10 Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.774522 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8e42-account-create-update-mtd79"] Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.789696 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rdwx" event={"ID":"6418a243-5699-42a3-8fab-d65c530c9951","Type":"ContainerStarted","Data":"a5e957fb74580066bf78b8278f65ee1b3e13330434bca538903d73afe512a090"} Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.789755 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rdwx" event={"ID":"6418a243-5699-42a3-8fab-d65c530c9951","Type":"ContainerStarted","Data":"28a49c26ed5983df61dd478607c39fd13bcfdd80f726d093a1fa96092771df86"} Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.807430 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.810709 4842 generic.go:334] "Generic (PLEG): container finished" podID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerID="f0a94a75b63c1a8041b919515cc44d86376bbe513e93d1848bcd51190a1482d3" exitCode=0 Feb 02 07:05:15 crc kubenswrapper[4842]: I0202 07:05:15.810770 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" event={"ID":"f57fef97-6ad3-4b54-9859-2b33853f7f6d","Type":"ContainerDied","Data":"f0a94a75b63c1a8041b919515cc44d86376bbe513e93d1848bcd51190a1482d3"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:15.831847 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-8rdwx" podStartSLOduration=1.83182449 podStartE2EDuration="1.83182449s" podCreationTimestamp="2026-02-02 07:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:15.817262282 +0000 UTC m=+1141.194530204" watchObservedRunningTime="2026-02-02 07:05:15.83182449 +0000 UTC m=+1141.209092402" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:15.865884 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-hhd7d"] Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.200255 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-716d-account-create-update-ft5kt"] Feb 02 07:05:16 crc kubenswrapper[4842]: W0202 07:05:16.232274 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1ffaeb5_5dc3_4ead_8b43_701f81a8c965.slice/crio-c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f WatchSource:0}: Error finding container c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f: Status 404 returned error can't find the container with id 
c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.251909 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.370223 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8p487"] Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.392734 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bfdd-account-create-update-rws4k"] Feb 02 07:05:16 crc kubenswrapper[4842]: W0202 07:05:16.481982 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c852e5a_26fe_4905_8483_4619c280f9c0.slice/crio-aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae WatchSource:0}: Error finding container aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae: Status 404 returned error can't find the container with id aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae Feb 02 07:05:16 crc kubenswrapper[4842]: W0202 07:05:16.482400 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc51cea52_ce54_4855_9d4c_97817c4cc6b0.slice/crio-4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437 WatchSource:0}: Error finding container 4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437: Status 404 returned error can't find the container with id 4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437 Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.484106 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.488469 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.580614 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gcnp\" (UniqueName: \"kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp\") pod \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.580986 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb\") pod \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.581027 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config\") pod \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.581143 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb\") pod \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.581207 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc\") pod \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\" (UID: \"f57fef97-6ad3-4b54-9859-2b33853f7f6d\") " Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.593978 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp" (OuterVolumeSpecName: "kube-api-access-5gcnp") pod "f57fef97-6ad3-4b54-9859-2b33853f7f6d" (UID: "f57fef97-6ad3-4b54-9859-2b33853f7f6d"). InnerVolumeSpecName "kube-api-access-5gcnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.640003 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-z87kx"] Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.683483 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gcnp\" (UniqueName: \"kubernetes.io/projected/f57fef97-6ad3-4b54-9859-2b33853f7f6d-kube-api-access-5gcnp\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.702939 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f57fef97-6ad3-4b54-9859-2b33853f7f6d" (UID: "f57fef97-6ad3-4b54-9859-2b33853f7f6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.705699 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config" (OuterVolumeSpecName: "config") pod "f57fef97-6ad3-4b54-9859-2b33853f7f6d" (UID: "f57fef97-6ad3-4b54-9859-2b33853f7f6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.711178 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f57fef97-6ad3-4b54-9859-2b33853f7f6d" (UID: "f57fef97-6ad3-4b54-9859-2b33853f7f6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.711413 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f57fef97-6ad3-4b54-9859-2b33853f7f6d" (UID: "f57fef97-6ad3-4b54-9859-2b33853f7f6d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.785069 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.785107 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.785116 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.785124 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f57fef97-6ad3-4b54-9859-2b33853f7f6d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.825139 4842 generic.go:334] "Generic (PLEG): container finished" podID="f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" containerID="17bb3eec7905f7b5df5e9c3137f1a5db8fc820e99f038ef4113064b8ca0bb24d" exitCode=0 Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.825359 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-716d-account-create-update-ft5kt" event={"ID":"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965","Type":"ContainerDied","Data":"17bb3eec7905f7b5df5e9c3137f1a5db8fc820e99f038ef4113064b8ca0bb24d"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.825471 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-716d-account-create-update-ft5kt" event={"ID":"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965","Type":"ContainerStarted","Data":"c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.826951 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-rws4k" event={"ID":"c51cea52-ce54-4855-9d4c-97817c4cc6b0","Type":"ContainerStarted","Data":"326e1290c30749283ca2bf9608aa395736ad83c0971c17e5e2948a81ffff16c0"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.826994 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-rws4k" event={"ID":"c51cea52-ce54-4855-9d4c-97817c4cc6b0","Type":"ContainerStarted","Data":"4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.828297 4842 generic.go:334] "Generic (PLEG): container finished" podID="27c72b5c-16bb-4404-8c00-6b37ed7d9b47" containerID="2b38ab8a50c4bfdef3036052e4dbdb50598c007951f872fa5af56a866e47db58" exitCode=0 Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.828354 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hhd7d" event={"ID":"27c72b5c-16bb-4404-8c00-6b37ed7d9b47","Type":"ContainerDied","Data":"2b38ab8a50c4bfdef3036052e4dbdb50598c007951f872fa5af56a866e47db58"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.828370 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hhd7d" event={"ID":"27c72b5c-16bb-4404-8c00-6b37ed7d9b47","Type":"ContainerStarted","Data":"3d91b23d6d9b6c109112ab4417aa2315357fa56338dce12c560bf3423a87cb00"} Feb 02 07:05:16 crc 
kubenswrapper[4842]: I0202 07:05:16.829439 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87kx" event={"ID":"3b89146d-a545-4525-8744-723e0d9248b5","Type":"ContainerStarted","Data":"9c624bec2cfab2b93f6c6a45dcd225604c34747efe7f2303db55b6d98511faf5"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.831031 4842 generic.go:334] "Generic (PLEG): container finished" podID="d82484f3-c883-4c12-8ca1-6de8ead67139" containerID="185ab6e958e5fc2a5da9e833e3789438b8d16f440f7c53e0467e8ff307a5f7c8" exitCode=0 Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.831101 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e42-account-create-update-mtd79" event={"ID":"d82484f3-c883-4c12-8ca1-6de8ead67139","Type":"ContainerDied","Data":"185ab6e958e5fc2a5da9e833e3789438b8d16f440f7c53e0467e8ff307a5f7c8"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.831123 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e42-account-create-update-mtd79" event={"ID":"d82484f3-c883-4c12-8ca1-6de8ead67139","Type":"ContainerStarted","Data":"cfe5692a4b77a70b1e8ebbd97f4ff631dfa1ec5b8e9d15783262873cfb83076b"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.832238 4842 generic.go:334] "Generic (PLEG): container finished" podID="6418a243-5699-42a3-8fab-d65c530c9951" containerID="a5e957fb74580066bf78b8278f65ee1b3e13330434bca538903d73afe512a090" exitCode=0 Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.832286 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rdwx" event={"ID":"6418a243-5699-42a3-8fab-d65c530c9951","Type":"ContainerDied","Data":"a5e957fb74580066bf78b8278f65ee1b3e13330434bca538903d73afe512a090"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.833328 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8p487" event={"ID":"9c852e5a-26fe-4905-8483-4619c280f9c0","Type":"ContainerStarted","Data":"1fdc53d1e29c1c53121cfb56667f86dc9ccc9f8da8c68e110eaaab428c59853f"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.833352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8p487" event={"ID":"9c852e5a-26fe-4905-8483-4619c280f9c0","Type":"ContainerStarted","Data":"aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.835530 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" event={"ID":"f57fef97-6ad3-4b54-9859-2b33853f7f6d","Type":"ContainerDied","Data":"7707ee54a5265cd6f331b436e56fc1213a27c7e80bff860552b4df87b7cb0473"} Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.835562 4842 scope.go:117] "RemoveContainer" containerID="f0a94a75b63c1a8041b919515cc44d86376bbe513e93d1848bcd51190a1482d3" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.835616 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6cb545bd4c-hqszm" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.852657 4842 scope.go:117] "RemoveContainer" containerID="95945828629b93199fdf9c3ec54c43205bcf2d7c6c586860cf34627eab21e480" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.904644 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-8p487" podStartSLOduration=2.904409001 podStartE2EDuration="2.904409001s" podCreationTimestamp="2026-02-02 07:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:16.895515882 +0000 UTC m=+1142.272783794" watchObservedRunningTime="2026-02-02 07:05:16.904409001 +0000 UTC m=+1142.281676913" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.921650 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bfdd-account-create-update-rws4k" podStartSLOduration=1.921634755 podStartE2EDuration="1.921634755s" podCreationTimestamp="2026-02-02 07:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:16.913624098 +0000 UTC m=+1142.290892000" watchObservedRunningTime="2026-02-02 07:05:16.921634755 +0000 UTC m=+1142.298902667" Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.950749 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"] Feb 02 07:05:16 crc kubenswrapper[4842]: I0202 07:05:16.957097 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6cb545bd4c-hqszm"] Feb 02 07:05:17 crc kubenswrapper[4842]: I0202 07:05:17.446925 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" path="/var/lib/kubelet/pods/f57fef97-6ad3-4b54-9859-2b33853f7f6d/volumes" Feb 02 07:05:17 crc kubenswrapper[4842]: I0202 07:05:17.845152 4842 generic.go:334] "Generic (PLEG): container finished" podID="9c852e5a-26fe-4905-8483-4619c280f9c0" containerID="1fdc53d1e29c1c53121cfb56667f86dc9ccc9f8da8c68e110eaaab428c59853f" exitCode=0 Feb 02 07:05:17 crc kubenswrapper[4842]: I0202 07:05:17.845234 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8p487" event={"ID":"9c852e5a-26fe-4905-8483-4619c280f9c0","Type":"ContainerDied","Data":"1fdc53d1e29c1c53121cfb56667f86dc9ccc9f8da8c68e110eaaab428c59853f"} Feb 02 07:05:17 crc kubenswrapper[4842]: I0202 07:05:17.849850 4842 generic.go:334] "Generic (PLEG): container finished" podID="c51cea52-ce54-4855-9d4c-97817c4cc6b0" containerID="326e1290c30749283ca2bf9608aa395736ad83c0971c17e5e2948a81ffff16c0" exitCode=0 Feb 02 07:05:17 crc kubenswrapper[4842]: I0202 07:05:17.849989 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-rws4k" event={"ID":"c51cea52-ce54-4855-9d4c-97817c4cc6b0","Type":"ContainerDied","Data":"326e1290c30749283ca2bf9608aa395736ad83c0971c17e5e2948a81ffff16c0"} Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.361423 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.366662 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.371646 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.383697 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508027 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts\") pod \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508084 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhggz\" (UniqueName: \"kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz\") pod \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\" (UID: \"27c72b5c-16bb-4404-8c00-6b37ed7d9b47\") " Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508138 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwr6v\" (UniqueName: \"kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v\") pod \"6418a243-5699-42a3-8fab-d65c530c9951\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508159 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmqkl\" (UniqueName: \"kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl\") pod \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508176 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts\") pod \"6418a243-5699-42a3-8fab-d65c530c9951\" (UID: \"6418a243-5699-42a3-8fab-d65c530c9951\") " Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508230 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts\") pod \"d82484f3-c883-4c12-8ca1-6de8ead67139\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508250 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs2cf\" (UniqueName: \"kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf\") pod \"d82484f3-c883-4c12-8ca1-6de8ead67139\" (UID: \"d82484f3-c883-4c12-8ca1-6de8ead67139\") " Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.508369 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts\") pod \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\" (UID: \"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965\") " Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.509501 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" (UID: "f1ffaeb5-5dc3-4ead-8b43-701f81a8c965"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.510038 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6418a243-5699-42a3-8fab-d65c530c9951" (UID: "6418a243-5699-42a3-8fab-d65c530c9951"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.510082 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d82484f3-c883-4c12-8ca1-6de8ead67139" (UID: "d82484f3-c883-4c12-8ca1-6de8ead67139"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.510128 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27c72b5c-16bb-4404-8c00-6b37ed7d9b47" (UID: "27c72b5c-16bb-4404-8c00-6b37ed7d9b47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.514763 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf" (OuterVolumeSpecName: "kube-api-access-rs2cf") pod "d82484f3-c883-4c12-8ca1-6de8ead67139" (UID: "d82484f3-c883-4c12-8ca1-6de8ead67139"). InnerVolumeSpecName "kube-api-access-rs2cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.514961 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl" (OuterVolumeSpecName: "kube-api-access-rmqkl") pod "f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" (UID: "f1ffaeb5-5dc3-4ead-8b43-701f81a8c965"). InnerVolumeSpecName "kube-api-access-rmqkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.516620 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz" (OuterVolumeSpecName: "kube-api-access-dhggz") pod "27c72b5c-16bb-4404-8c00-6b37ed7d9b47" (UID: "27c72b5c-16bb-4404-8c00-6b37ed7d9b47"). InnerVolumeSpecName "kube-api-access-dhggz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.519051 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v" (OuterVolumeSpecName: "kube-api-access-bwr6v") pod "6418a243-5699-42a3-8fab-d65c530c9951" (UID: "6418a243-5699-42a3-8fab-d65c530c9951"). InnerVolumeSpecName "kube-api-access-bwr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.609882 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610039 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610051 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhggz\" (UniqueName: \"kubernetes.io/projected/27c72b5c-16bb-4404-8c00-6b37ed7d9b47-kube-api-access-dhggz\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610063 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwr6v\" (UniqueName: \"kubernetes.io/projected/6418a243-5699-42a3-8fab-d65c530c9951-kube-api-access-bwr6v\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610071 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6418a243-5699-42a3-8fab-d65c530c9951-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610080 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmqkl\" (UniqueName: \"kubernetes.io/projected/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965-kube-api-access-rmqkl\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610090 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d82484f3-c883-4c12-8ca1-6de8ead67139-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.610099 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs2cf\" (UniqueName: \"kubernetes.io/projected/d82484f3-c883-4c12-8ca1-6de8ead67139-kube-api-access-rs2cf\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.903369 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-hhd7d" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.903359 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-hhd7d" event={"ID":"27c72b5c-16bb-4404-8c00-6b37ed7d9b47","Type":"ContainerDied","Data":"3d91b23d6d9b6c109112ab4417aa2315357fa56338dce12c560bf3423a87cb00"} Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.903501 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d91b23d6d9b6c109112ab4417aa2315357fa56338dce12c560bf3423a87cb00" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.906746 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8e42-account-create-update-mtd79" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.906810 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e42-account-create-update-mtd79" event={"ID":"d82484f3-c883-4c12-8ca1-6de8ead67139","Type":"ContainerDied","Data":"cfe5692a4b77a70b1e8ebbd97f4ff631dfa1ec5b8e9d15783262873cfb83076b"} Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.906930 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfe5692a4b77a70b1e8ebbd97f4ff631dfa1ec5b8e9d15783262873cfb83076b" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.908234 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8rdwx" event={"ID":"6418a243-5699-42a3-8fab-d65c530c9951","Type":"ContainerDied","Data":"28a49c26ed5983df61dd478607c39fd13bcfdd80f726d093a1fa96092771df86"} Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.908272 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a49c26ed5983df61dd478607c39fd13bcfdd80f726d093a1fa96092771df86" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.908352 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8rdwx" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.909905 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-716d-account-create-update-ft5kt" event={"ID":"f1ffaeb5-5dc3-4ead-8b43-701f81a8c965","Type":"ContainerDied","Data":"c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f"} Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.909969 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c701c71404ce89e1ea6b0999f0d53d4e8eb458f082afccd142d6f68dc34c401f" Feb 02 07:05:18 crc kubenswrapper[4842]: I0202 07:05:18.910044 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-ft5kt" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.281195 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.290096 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8p487" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.470785 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts\") pod \"9c852e5a-26fe-4905-8483-4619c280f9c0\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.470869 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpj84\" (UniqueName: \"kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84\") pod \"9c852e5a-26fe-4905-8483-4619c280f9c0\" (UID: \"9c852e5a-26fe-4905-8483-4619c280f9c0\") " Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.470928 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfv75\" (UniqueName: \"kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75\") pod \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.471083 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts\") pod \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\" (UID: \"c51cea52-ce54-4855-9d4c-97817c4cc6b0\") " Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.471580 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c852e5a-26fe-4905-8483-4619c280f9c0" (UID: "9c852e5a-26fe-4905-8483-4619c280f9c0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.471861 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c51cea52-ce54-4855-9d4c-97817c4cc6b0" (UID: "c51cea52-ce54-4855-9d4c-97817c4cc6b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.476648 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84" (OuterVolumeSpecName: "kube-api-access-mpj84") pod "9c852e5a-26fe-4905-8483-4619c280f9c0" (UID: "9c852e5a-26fe-4905-8483-4619c280f9c0"). InnerVolumeSpecName "kube-api-access-mpj84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.477553 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75" (OuterVolumeSpecName: "kube-api-access-jfv75") pod "c51cea52-ce54-4855-9d4c-97817c4cc6b0" (UID: "c51cea52-ce54-4855-9d4c-97817c4cc6b0"). InnerVolumeSpecName "kube-api-access-jfv75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.574748 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c51cea52-ce54-4855-9d4c-97817c4cc6b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.574814 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c852e5a-26fe-4905-8483-4619c280f9c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.574840 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpj84\" (UniqueName: \"kubernetes.io/projected/9c852e5a-26fe-4905-8483-4619c280f9c0-kube-api-access-mpj84\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.574866 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfv75\" (UniqueName: \"kubernetes.io/projected/c51cea52-ce54-4855-9d4c-97817c4cc6b0-kube-api-access-jfv75\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.951673 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-rws4k" event={"ID":"c51cea52-ce54-4855-9d4c-97817c4cc6b0","Type":"ContainerDied","Data":"4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437"} Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.952072 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4007cf1b199fca1bb0e11ca4dcb702cf826f20a774a65d870161c3df8f2c9437" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.951712 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-rws4k" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.955268 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87kx" event={"ID":"3b89146d-a545-4525-8744-723e0d9248b5","Type":"ContainerStarted","Data":"9a34bab1d66516a5177aafc62bed955fa80608af2d16da47596a9168353c819f"} Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.957899 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8p487" event={"ID":"9c852e5a-26fe-4905-8483-4619c280f9c0","Type":"ContainerDied","Data":"aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae"} Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.957972 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa5e8244172d22df4dd0e5e74f8d5b534773098b946d823ca2d7f01ebe48feae" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.958058 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8p487" Feb 02 07:05:22 crc kubenswrapper[4842]: I0202 07:05:22.985620 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-z87kx" podStartSLOduration=2.451864836 podStartE2EDuration="7.98559588s" podCreationTimestamp="2026-02-02 07:05:15 +0000 UTC" firstStartedPulling="2026-02-02 07:05:16.666315106 +0000 UTC m=+1142.043583038" lastFinishedPulling="2026-02-02 07:05:22.20004617 +0000 UTC m=+1147.577314082" observedRunningTime="2026-02-02 07:05:22.97625568 +0000 UTC m=+1148.353523602" watchObservedRunningTime="2026-02-02 07:05:22.98559588 +0000 UTC m=+1148.362863802" Feb 02 07:05:25 crc kubenswrapper[4842]: I0202 07:05:25.993554 4842 generic.go:334] "Generic (PLEG): container finished" podID="3b89146d-a545-4525-8744-723e0d9248b5" containerID="9a34bab1d66516a5177aafc62bed955fa80608af2d16da47596a9168353c819f" exitCode=0 Feb 02 07:05:25 crc kubenswrapper[4842]: I0202 07:05:25.993644 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87kx" event={"ID":"3b89146d-a545-4525-8744-723e0d9248b5","Type":"ContainerDied","Data":"9a34bab1d66516a5177aafc62bed955fa80608af2d16da47596a9168353c819f"} Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.427303 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.484909 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle\") pod \"3b89146d-a545-4525-8744-723e0d9248b5\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.484994 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpxv4\" (UniqueName: \"kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4\") pod \"3b89146d-a545-4525-8744-723e0d9248b5\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.485073 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data\") pod \"3b89146d-a545-4525-8744-723e0d9248b5\" (UID: \"3b89146d-a545-4525-8744-723e0d9248b5\") " Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.502040 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4" (OuterVolumeSpecName: "kube-api-access-xpxv4") pod "3b89146d-a545-4525-8744-723e0d9248b5" (UID: "3b89146d-a545-4525-8744-723e0d9248b5"). InnerVolumeSpecName "kube-api-access-xpxv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.527808 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b89146d-a545-4525-8744-723e0d9248b5" (UID: "3b89146d-a545-4525-8744-723e0d9248b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.528611 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data" (OuterVolumeSpecName: "config-data") pod "3b89146d-a545-4525-8744-723e0d9248b5" (UID: "3b89146d-a545-4525-8744-723e0d9248b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.588166 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.588248 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpxv4\" (UniqueName: \"kubernetes.io/projected/3b89146d-a545-4525-8744-723e0d9248b5-kube-api-access-xpxv4\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:27 crc kubenswrapper[4842]: I0202 07:05:27.588273 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b89146d-a545-4525-8744-723e0d9248b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.018051 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-z87kx" event={"ID":"3b89146d-a545-4525-8744-723e0d9248b5","Type":"ContainerDied","Data":"9c624bec2cfab2b93f6c6a45dcd225604c34747efe7f2303db55b6d98511faf5"} Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.018110 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c624bec2cfab2b93f6c6a45dcd225604c34747efe7f2303db55b6d98511faf5" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.018177 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-z87kx" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.322711 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323189 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6418a243-5699-42a3-8fab-d65c530c9951" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323294 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6418a243-5699-42a3-8fab-d65c530c9951" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323351 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51cea52-ce54-4855-9d4c-97817c4cc6b0" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323395 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51cea52-ce54-4855-9d4c-97817c4cc6b0" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323450 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c72b5c-16bb-4404-8c00-6b37ed7d9b47" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323493 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c72b5c-16bb-4404-8c00-6b37ed7d9b47" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323543 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="dnsmasq-dns" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323586 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="dnsmasq-dns" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323633 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c852e5a-26fe-4905-8483-4619c280f9c0" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323680 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c852e5a-26fe-4905-8483-4619c280f9c0" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323740 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b89146d-a545-4525-8744-723e0d9248b5" containerName="keystone-db-sync" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323784 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b89146d-a545-4525-8744-723e0d9248b5" containerName="keystone-db-sync" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323834 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="init" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323877 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="init" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.323932 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.323979 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: E0202 07:05:28.324032 4842 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d82484f3-c883-4c12-8ca1-6de8ead67139" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.324129 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d82484f3-c883-4c12-8ca1-6de8ead67139" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326432 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c72b5c-16bb-4404-8c00-6b37ed7d9b47" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326532 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6418a243-5699-42a3-8fab-d65c530c9951" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326591 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c852e5a-26fe-4905-8483-4619c280f9c0" containerName="mariadb-database-create" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326647 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51cea52-ce54-4855-9d4c-97817c4cc6b0" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326697 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b89146d-a545-4525-8744-723e0d9248b5" containerName="keystone-db-sync" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326742 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d82484f3-c883-4c12-8ca1-6de8ead67139" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326792 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f57fef97-6ad3-4b54-9859-2b33853f7f6d" containerName="dnsmasq-dns" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.326845 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" containerName="mariadb-account-create-update" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.327683 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.346090 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r6tjh"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.347601 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.352679 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.352799 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.352878 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.353068 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.353138 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.353241 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6drft" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.381275 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r6tjh"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400319 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4fz\" (UniqueName: \"kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400364 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400392 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400447 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29cd6\" (UniqueName: \"kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400470 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400491 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys\") pod 
\"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400511 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400536 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400578 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400624 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400664 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.400686 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.502643 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.502686 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.502742 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: 
\"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.502776 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.503727 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.503750 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.504457 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.504572 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.504703 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j4fz\" (UniqueName: \"kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.505085 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.505156 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.505274 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 
07:05:28.505402 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.506159 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.506577 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29cd6\" (UniqueName: \"kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.505875 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.506124 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.508399 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.508447 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.508603 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.508866 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.509836 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.559200 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-phj68"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.560154 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.565546 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j4fz\" (UniqueName: \"kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz\") pod \"dnsmasq-dns-54b4bb76d5-t96rz\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.569635 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.571318 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.571486 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fr64b" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.587026 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29cd6\" (UniqueName: \"kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6\") pod \"keystone-bootstrap-r6tjh\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609196 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609259 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609285 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609321 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4nz2\" (UniqueName: \"kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609411 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.609452 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.634283 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-phj68"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.644298 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-rpkx6"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.648368 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.649272 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.667934 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qlr5t" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.668128 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.669736 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.670837 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.672557 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rpkx6"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710406 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710453 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710488 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710530 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts\") pod \"cinder-db-sync-phj68\" (UID: 
\"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710546 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710565 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710613 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4nz2\" (UniqueName: \"kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710644 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710682 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xhng\" (UniqueName: \"kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.710769 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.714814 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.721408 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.722842 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.725363 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.755183 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4nz2\" (UniqueName: \"kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2\") pod \"cinder-db-sync-phj68\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.795063 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-sjstk"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.796392 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.801396 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.802131 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-drtzj" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813119 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813174 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xhng\" (UniqueName: \"kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813223 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813276 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz6l8\" (UniqueName: \"kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813341 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.813403 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config\") pod 
\"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.819960 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-sjstk"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.826031 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.835032 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.849073 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.856311 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.858355 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.864171 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xhng\" (UniqueName: \"kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng\") pod \"neutron-db-sync-rpkx6\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.883517 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.883714 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.902016 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916674 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916763 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916783 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916810 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916829 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916864 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916886 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916922 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916976 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2576\" (UniqueName: \"kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.916996 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz6l8\" (UniqueName: \"kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.923035 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"] Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.981960 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.924190 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.990079 4842 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:28 crc kubenswrapper[4842]: I0202 07:05:28.999228 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz6l8\" (UniqueName: \"kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8\") pod \"barbican-db-sync-sjstk\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.010290 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2ddsf"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.031410 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-phj68" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035119 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035311 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035349 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035538 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2576\" (UniqueName: \"kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035587 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035626 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035647 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.035696 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.036798 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.038561 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.039160 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.039216 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rf5dt" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.039552 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.039668 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.043905 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.045559 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.045769 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.051910 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.052205 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.058050 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2576\" (UniqueName: \"kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576\") pod \"ceilometer-0\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.061094 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2ddsf"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.139698 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.139774 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140171 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140212 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140853 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140887 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrjw\" (UniqueName: \"kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140912 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-672bm\" (UniqueName: \"kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140928 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140968 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.140997 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.141013 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.150680 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sjstk" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.214676 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.242242 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.242363 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.242391 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trrjw\" (UniqueName: \"kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.242412 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-672bm\" (UniqueName: \"kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.242427 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243137 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243171 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle\") pod \"placement-db-sync-2ddsf\" 
(UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243189 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243224 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243315 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.243335 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.244139 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.244478 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.244702 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.244783 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.244801 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 
07:05:29.245637 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.246229 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.248900 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.251089 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.260844 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-672bm\" (UniqueName: \"kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm\") pod \"placement-db-sync-2ddsf\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.262113 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trrjw\" (UniqueName: \"kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw\") pod \"dnsmasq-dns-5dc4fcdbc-b8t4s\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.320196 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.361485 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r6tjh"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.365143 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2ddsf" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.459944 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.481783 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.483540 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.491955 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.492012 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fpq5h" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.492170 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.492229 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.511922 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.519009 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.520397 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.522315 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.522597 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.527008 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.625658 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rpkx6"] Feb 02 07:05:29 crc kubenswrapper[4842]: W0202 07:05:29.635692 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc49955b5_5145_4939_91e5_280569e18a33.slice/crio-08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec WatchSource:0}: Error finding container 08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec: Status 404 returned error can't find the container with id 08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.636390 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-phj68"] Feb 02 07:05:29 crc kubenswrapper[4842]: W0202 07:05:29.644811 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9f1c72e_953b_45ba_ba69_c7574f82e8ad.slice/crio-e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704 WatchSource:0}: Error finding container e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704: Status 404 returned error can't find the container with id e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704 Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648116 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") 
" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648151 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648169 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn2sg\" (UniqueName: \"kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648186 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648208 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648236 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86gvt\" (UniqueName: \"kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648255 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648289 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648302 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648344 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648363 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648396 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648420 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648448 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648493 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.648509 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749610 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749645 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn2sg\" (UniqueName: \"kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749665 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749685 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749712 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86gvt\" (UniqueName: \"kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749730 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749750 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749764 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749787 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749804 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749844 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749869 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749898 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749953 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.749971 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.750002 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.750178 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.750607 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.751562 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.751598 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.751624 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" 
Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.752159 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.753675 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.754912 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.755428 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.763273 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.767377 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.773971 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.777307 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86gvt\" (UniqueName: \"kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.778294 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.778917 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.806289 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn2sg\" (UniqueName: \"kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.827181 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.833249 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.836205 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-sjstk"] Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.840500 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.889607 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"] Feb 02 07:05:29 crc kubenswrapper[4842]: W0202 07:05:29.995762 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfff8a308_89ab_409f_9053_6a363794df83.slice/crio-7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33 WatchSource:0}: Error finding container 7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33: Status 404 returned error can't find the container with id 7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33 Feb 02 07:05:29 crc kubenswrapper[4842]: I0202 07:05:29.996014 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2ddsf"] Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.066085 4842 generic.go:334] "Generic (PLEG): container finished" podID="7451d324-f6ed-4ad3-aacb-875192778c83" containerID="ada27da3da689853a4b7facfad88a4f4ff5e03c7c2e70f234e5841ce1d04d4c9" exitCode=0 Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.066162 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" event={"ID":"7451d324-f6ed-4ad3-aacb-875192778c83","Type":"ContainerDied","Data":"ada27da3da689853a4b7facfad88a4f4ff5e03c7c2e70f234e5841ce1d04d4c9"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.066188 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" event={"ID":"7451d324-f6ed-4ad3-aacb-875192778c83","Type":"ContainerStarted","Data":"2a917ef164d764f672ca6277247b623a831a1cc93b3d32269b491951233d1ed8"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.068637 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-sync-sjstk" event={"ID":"80249ec8-3d5a-4020-bed2-83b8ecd32ab9","Type":"ContainerStarted","Data":"cd2d0997e2cc127c80bb06f907a598f4209b55d656a3634a4391e4cc9d674026"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.070858 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rpkx6" event={"ID":"c49955b5-5145-4939-91e5-280569e18a33","Type":"ContainerStarted","Data":"e6c087a85acb8c56b9934f5572a1bcc68f491cf79f0f8b755c20d672d211503e"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.070898 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rpkx6" event={"ID":"c49955b5-5145-4939-91e5-280569e18a33","Type":"ContainerStarted","Data":"08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.074508 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerStarted","Data":"7ea6f3db6a36a7dee937382b0699d18f0905deeb5700b93c12a3f06c02d6628f"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.074615 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.091631 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.093346 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ddsf" event={"ID":"fff8a308-89ab-409f-9053-6a363794df83","Type":"ContainerStarted","Data":"7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.107513 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-phj68" event={"ID":"d9f1c72e-953b-45ba-ba69-c7574f82e8ad","Type":"ContainerStarted","Data":"e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.112716 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" event={"ID":"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49","Type":"ContainerStarted","Data":"3bf1c02d1eb4a6fd6bfb8e0d7089ca1be72bb9eccd12b09bde66e78b797862a2"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.119730 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-rpkx6" podStartSLOduration=2.119705626 podStartE2EDuration="2.119705626s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:30.117587384 +0000 UTC m=+1155.494855286" watchObservedRunningTime="2026-02-02 07:05:30.119705626 +0000 UTC m=+1155.496973538" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.121313 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6tjh" event={"ID":"34848244-9de8-4950-8a9a-7e571c3104c9","Type":"ContainerStarted","Data":"7195db1dd98fa99bf79467abe2ecc6133db9df280df7df78ae67b06d2ce5fe42"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.121362 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6tjh" 
event={"ID":"34848244-9de8-4950-8a9a-7e571c3104c9","Type":"ContainerStarted","Data":"fd34e55492114d1dc15256d5270c613a7bb387100ffe277e3f9d66d6fd42c42e"} Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.205140 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r6tjh" podStartSLOduration=2.20510737 podStartE2EDuration="2.20510737s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:30.202483436 +0000 UTC m=+1155.579751348" watchObservedRunningTime="2026-02-02 07:05:30.20510737 +0000 UTC m=+1155.582375282" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.460197 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566330 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566450 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j4fz\" (UniqueName: \"kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566507 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566599 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566614 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.566631 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb\") pod \"7451d324-f6ed-4ad3-aacb-875192778c83\" (UID: \"7451d324-f6ed-4ad3-aacb-875192778c83\") " Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.574018 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz" (OuterVolumeSpecName: "kube-api-access-4j4fz") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "kube-api-access-4j4fz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.602089 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.602110 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.620058 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.620722 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config" (OuterVolumeSpecName: "config") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.627374 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7451d324-f6ed-4ad3-aacb-875192778c83" (UID: "7451d324-f6ed-4ad3-aacb-875192778c83"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670615 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j4fz\" (UniqueName: \"kubernetes.io/projected/7451d324-f6ed-4ad3-aacb-875192778c83-kube-api-access-4j4fz\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670858 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670871 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670880 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670892 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.670900 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7451d324-f6ed-4ad3-aacb-875192778c83-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:30 crc kubenswrapper[4842]: I0202 07:05:30.748771 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.159071 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerStarted","Data":"8c23fbb0fff0a16501dd8fc713b53a51e1c6260cd6f5e5446454a32930538b9a"} Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.171651 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.172740 4842 generic.go:334] "Generic (PLEG): container finished" podID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerID="b65de85796493b7fd1d1b4d84ddbf8a0d1cb6cbceca0fba243ff835d64eb5002" exitCode=0 Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.172833 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" event={"ID":"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49","Type":"ContainerDied","Data":"b65de85796493b7fd1d1b4d84ddbf8a0d1cb6cbceca0fba243ff835d64eb5002"} Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.197124 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.197668 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b4bb76d5-t96rz" event={"ID":"7451d324-f6ed-4ad3-aacb-875192778c83","Type":"ContainerDied","Data":"2a917ef164d764f672ca6277247b623a831a1cc93b3d32269b491951233d1ed8"} Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.197699 4842 scope.go:117] "RemoveContainer" containerID="ada27da3da689853a4b7facfad88a4f4ff5e03c7c2e70f234e5841ce1d04d4c9" Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.290634 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.300014 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.330578 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.361628 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b4bb76d5-t96rz"] Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.451795 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7451d324-f6ed-4ad3-aacb-875192778c83" path="/var/lib/kubelet/pods/7451d324-f6ed-4ad3-aacb-875192778c83/volumes" Feb 02 07:05:31 crc kubenswrapper[4842]: I0202 07:05:31.731869 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:31 crc kubenswrapper[4842]: W0202 07:05:31.753483 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbfcf9b2_c06f_457c_a13c_b3dd8399eb89.slice/crio-75b912a951245e0e56c8a52eef30076143aeede0c081fb4651fe4e34d2509d66 WatchSource:0}: Error finding container 75b912a951245e0e56c8a52eef30076143aeede0c081fb4651fe4e34d2509d66: Status 404 returned error can't find the container with id 75b912a951245e0e56c8a52eef30076143aeede0c081fb4651fe4e34d2509d66 Feb 02 07:05:32 crc kubenswrapper[4842]: I0202 07:05:32.229944 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerStarted","Data":"ccde2cd433c74600bcdce93601254d9511293f06a63ab6132e87513d3754c1e9"} Feb 02 07:05:32 crc kubenswrapper[4842]: I0202 07:05:32.231918 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerStarted","Data":"75b912a951245e0e56c8a52eef30076143aeede0c081fb4651fe4e34d2509d66"} Feb 02 07:05:32 crc kubenswrapper[4842]: I0202 07:05:32.237875 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" event={"ID":"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49","Type":"ContainerStarted","Data":"070ececa81450530af921167c87446de2343f6f27873a844bed7018478edcd17"} Feb 02 07:05:32 crc kubenswrapper[4842]: I0202 07:05:32.238127 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:32 crc kubenswrapper[4842]: I0202 07:05:32.267638 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" podStartSLOduration=4.267600126 podStartE2EDuration="4.267600126s" 
podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:32.260852139 +0000 UTC m=+1157.638120061" watchObservedRunningTime="2026-02-02 07:05:32.267600126 +0000 UTC m=+1157.644868038" Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.253320 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerStarted","Data":"2fd96f80d20d678e2e8cc672e30a0503d912638602ef248f0350d2eed7a5acda"} Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.253774 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-log" containerID="cri-o://ccde2cd433c74600bcdce93601254d9511293f06a63ab6132e87513d3754c1e9" gracePeriod=30 Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.253884 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-httpd" containerID="cri-o://2fd96f80d20d678e2e8cc672e30a0503d912638602ef248f0350d2eed7a5acda" gracePeriod=30 Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.257843 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerStarted","Data":"c5982122d3335d8f8af9afed233b6885e136dd6acfc9481bba66caad8b099e8d"} Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.257887 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerStarted","Data":"d2517508f58a8b7c4c13459a97cc7ab9e10a897e173d407ff1912286e20ae247"} Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.257944 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-log" containerID="cri-o://d2517508f58a8b7c4c13459a97cc7ab9e10a897e173d407ff1912286e20ae247" gracePeriod=30 Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.257947 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-httpd" containerID="cri-o://c5982122d3335d8f8af9afed233b6885e136dd6acfc9481bba66caad8b099e8d" gracePeriod=30 Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.277327 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.277308258 podStartE2EDuration="5.277308258s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:33.27492735 +0000 UTC m=+1158.652195262" watchObservedRunningTime="2026-02-02 07:05:33.277308258 +0000 UTC m=+1158.654576170" Feb 02 07:05:33 crc kubenswrapper[4842]: I0202 07:05:33.359131 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.359113114 podStartE2EDuration="5.359113114s" 
podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:33.350996314 +0000 UTC m=+1158.728264226" watchObservedRunningTime="2026-02-02 07:05:33.359113114 +0000 UTC m=+1158.736381026" Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.273172 4842 generic.go:334] "Generic (PLEG): container finished" podID="0083ea44-21b0-492b-971b-671241ff8abc" containerID="2fd96f80d20d678e2e8cc672e30a0503d912638602ef248f0350d2eed7a5acda" exitCode=0 Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.274523 4842 generic.go:334] "Generic (PLEG): container finished" podID="0083ea44-21b0-492b-971b-671241ff8abc" containerID="ccde2cd433c74600bcdce93601254d9511293f06a63ab6132e87513d3754c1e9" exitCode=143 Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.273264 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerDied","Data":"2fd96f80d20d678e2e8cc672e30a0503d912638602ef248f0350d2eed7a5acda"} Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.274684 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerDied","Data":"ccde2cd433c74600bcdce93601254d9511293f06a63ab6132e87513d3754c1e9"} Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.278540 4842 generic.go:334] "Generic (PLEG): container finished" podID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerID="c5982122d3335d8f8af9afed233b6885e136dd6acfc9481bba66caad8b099e8d" exitCode=143 Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.278565 4842 generic.go:334] "Generic (PLEG): container finished" podID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerID="d2517508f58a8b7c4c13459a97cc7ab9e10a897e173d407ff1912286e20ae247" exitCode=143 Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.278572 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerDied","Data":"c5982122d3335d8f8af9afed233b6885e136dd6acfc9481bba66caad8b099e8d"} Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.278623 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerDied","Data":"d2517508f58a8b7c4c13459a97cc7ab9e10a897e173d407ff1912286e20ae247"} Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.280556 4842 generic.go:334] "Generic (PLEG): container finished" podID="34848244-9de8-4950-8a9a-7e571c3104c9" containerID="7195db1dd98fa99bf79467abe2ecc6133db9df280df7df78ae67b06d2ce5fe42" exitCode=0 Feb 02 07:05:34 crc kubenswrapper[4842]: I0202 07:05:34.280621 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6tjh" event={"ID":"34848244-9de8-4950-8a9a-7e571c3104c9","Type":"ContainerDied","Data":"7195db1dd98fa99bf79467abe2ecc6133db9df280df7df78ae67b06d2ce5fe42"} Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.270817 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.322358 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.344161 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6tjh" event={"ID":"34848244-9de8-4950-8a9a-7e571c3104c9","Type":"ContainerDied","Data":"fd34e55492114d1dc15256d5270c613a7bb387100ffe277e3f9d66d6fd42c42e"} Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.344204 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd34e55492114d1dc15256d5270c613a7bb387100ffe277e3f9d66d6fd42c42e" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.344286 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r6tjh" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.374992 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.375313 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.375440 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.375547 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.375707 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29cd6\" (UniqueName: \"kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.375845 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data\") pod \"34848244-9de8-4950-8a9a-7e571c3104c9\" (UID: \"34848244-9de8-4950-8a9a-7e571c3104c9\") " Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.385810 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.395263 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6" (OuterVolumeSpecName: "kube-api-access-29cd6") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "kube-api-access-29cd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.399524 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.417653 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts" (OuterVolumeSpecName: "scripts") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.470459 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.486171 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29cd6\" (UniqueName: \"kubernetes.io/projected/34848244-9de8-4950-8a9a-7e571c3104c9-kube-api-access-29cd6\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.486202 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.486214 4842 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.486240 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.486250 4842 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.497471 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data" (OuterVolumeSpecName: "config-data") pod "34848244-9de8-4950-8a9a-7e571c3104c9" (UID: "34848244-9de8-4950-8a9a-7e571c3104c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.535684 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.535958 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" containerID="cri-o://b3a7c436e2e8d2b98b1b382d46734ec10fcb3fb8ee566aaba25f0dda55dc5702" gracePeriod=10 Feb 02 07:05:39 crc kubenswrapper[4842]: I0202 07:05:39.587860 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34848244-9de8-4950-8a9a-7e571c3104c9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.357110 4842 generic.go:334] "Generic (PLEG): container finished" podID="e793f6a1-ed49-496a-af57-84d696daf728" containerID="b3a7c436e2e8d2b98b1b382d46734ec10fcb3fb8ee566aaba25f0dda55dc5702" exitCode=0 Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.357163 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" event={"ID":"e793f6a1-ed49-496a-af57-84d696daf728","Type":"ContainerDied","Data":"b3a7c436e2e8d2b98b1b382d46734ec10fcb3fb8ee566aaba25f0dda55dc5702"} Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.367760 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.461486 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r6tjh"] Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.492588 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r6tjh"] Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.498989 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xh7mg"] Feb 02 07:05:40 crc kubenswrapper[4842]: E0202 07:05:40.499346 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34848244-9de8-4950-8a9a-7e571c3104c9" containerName="keystone-bootstrap" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.499361 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="34848244-9de8-4950-8a9a-7e571c3104c9" containerName="keystone-bootstrap" Feb 02 07:05:40 crc kubenswrapper[4842]: E0202 07:05:40.499378 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7451d324-f6ed-4ad3-aacb-875192778c83" containerName="init" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.499385 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7451d324-f6ed-4ad3-aacb-875192778c83" containerName="init" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.499560 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7451d324-f6ed-4ad3-aacb-875192778c83" containerName="init" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.499578 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="34848244-9de8-4950-8a9a-7e571c3104c9" containerName="keystone-bootstrap" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.500136 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.506951 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.507406 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6drft" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.507572 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.507592 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.507720 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.519788 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xh7mg"] Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614127 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92g4\" (UniqueName: \"kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614175 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614193 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614258 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614279 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.614298 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715610 4842 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-h92g4\" (UniqueName: \"kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715654 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715676 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715731 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715749 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.715769 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.719927 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.720030 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.720433 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.729877 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data\") pod \"keystone-bootstrap-xh7mg\" (UID: 
\"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.730040 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.732292 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92g4\" (UniqueName: \"kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4\") pod \"keystone-bootstrap-xh7mg\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:40 crc kubenswrapper[4842]: I0202 07:05:40.835334 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:05:41 crc kubenswrapper[4842]: I0202 07:05:41.450064 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34848244-9de8-4950-8a9a-7e571c3104c9" path="/var/lib/kubelet/pods/34848244-9de8-4950-8a9a-7e571c3104c9/volumes" Feb 02 07:05:45 crc kubenswrapper[4842]: I0202 07:05:45.367832 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Feb 02 07:05:47 crc kubenswrapper[4842]: I0202 07:05:47.419237 4842 generic.go:334] "Generic (PLEG): container finished" podID="c49955b5-5145-4939-91e5-280569e18a33" containerID="e6c087a85acb8c56b9934f5572a1bcc68f491cf79f0f8b755c20d672d211503e" exitCode=0 Feb 02 07:05:47 crc kubenswrapper[4842]: I0202 07:05:47.419267 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rpkx6" event={"ID":"c49955b5-5145-4939-91e5-280569e18a33","Type":"ContainerDied","Data":"e6c087a85acb8c56b9934f5572a1bcc68f491cf79f0f8b755c20d672d211503e"} Feb 02 07:05:50 crc kubenswrapper[4842]: I0202 07:05:50.368431 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Feb 02 07:05:50 crc kubenswrapper[4842]: I0202 07:05:50.368962 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.522876 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.535099 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.557313 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647008 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn2sg\" (UniqueName: \"kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647053 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle\") pod \"c49955b5-5145-4939-91e5-280569e18a33\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647110 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647134 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647152 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647200 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647240 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647267 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647334 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647348 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 
02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647388 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647409 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647425 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647443 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647462 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647485 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647503 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86gvt\" (UniqueName: \"kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt\") pod \"0083ea44-21b0-492b-971b-671241ff8abc\" (UID: \"0083ea44-21b0-492b-971b-671241ff8abc\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647523 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xhng\" (UniqueName: \"kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng\") pod \"c49955b5-5145-4939-91e5-280569e18a33\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647572 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config\") pod \"c49955b5-5145-4939-91e5-280569e18a33\" (UID: \"c49955b5-5145-4939-91e5-280569e18a33\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.647802 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.648191 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.654795 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts" (OuterVolumeSpecName: "scripts") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.656846 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts" (OuterVolumeSpecName: "scripts") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.657074 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs" (OuterVolumeSpecName: "logs") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.660234 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg" (OuterVolumeSpecName: "kube-api-access-dn2sg") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "kube-api-access-dn2sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.660695 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs" (OuterVolumeSpecName: "logs") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.663166 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt" (OuterVolumeSpecName: "kube-api-access-86gvt") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "kube-api-access-86gvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.666169 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.667922 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng" (OuterVolumeSpecName: "kube-api-access-4xhng") pod "c49955b5-5145-4939-91e5-280569e18a33" (UID: "c49955b5-5145-4939-91e5-280569e18a33"). InnerVolumeSpecName "kube-api-access-4xhng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.673759 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.705071 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.711828 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c49955b5-5145-4939-91e5-280569e18a33" (UID: "c49955b5-5145-4939-91e5-280569e18a33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.733688 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config" (OuterVolumeSpecName: "config") pod "c49955b5-5145-4939-91e5-280569e18a33" (UID: "c49955b5-5145-4939-91e5-280569e18a33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.750127 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.750236 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") pod \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\" (UID: \"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89\") " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.750972 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.750990 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751000 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751020 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751029 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0083ea44-21b0-492b-971b-671241ff8abc-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751038 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751045 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751059 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751068 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86gvt\" (UniqueName: \"kubernetes.io/projected/0083ea44-21b0-492b-971b-671241ff8abc-kube-api-access-86gvt\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751077 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xhng\" (UniqueName: \"kubernetes.io/projected/c49955b5-5145-4939-91e5-280569e18a33-kube-api-access-4xhng\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751085 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c49955b5-5145-4939-91e5-280569e18a33-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751093 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn2sg\" (UniqueName: \"kubernetes.io/projected/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-kube-api-access-dn2sg\") on node \"crc\" DevicePath \"\"" Feb 02 
07:05:51 crc kubenswrapper[4842]: W0202 07:05:51.751158 4842 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89/volumes/kubernetes.io~secret/combined-ca-bundle Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.751167 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.753820 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.760397 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.777656 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.779207 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data" (OuterVolumeSpecName: "config-data") pod "0083ea44-21b0-492b-971b-671241ff8abc" (UID: "0083ea44-21b0-492b-971b-671241ff8abc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.784207 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.786771 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data" (OuterVolumeSpecName: "config-data") pod "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" (UID: "bbfcf9b2-c06f-457c-a13c-b3dd8399eb89"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.808169 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852396 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852427 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852439 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852449 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852457 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852465 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852473 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0083ea44-21b0-492b-971b-671241ff8abc-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.852480 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.951811 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0083ea44-21b0-492b-971b-671241ff8abc","Type":"ContainerDied","Data":"8c23fbb0fff0a16501dd8fc713b53a51e1c6260cd6f5e5446454a32930538b9a"} Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.951859 4842 scope.go:117] "RemoveContainer" containerID="2fd96f80d20d678e2e8cc672e30a0503d912638602ef248f0350d2eed7a5acda" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.951959 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.964362 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rpkx6" event={"ID":"c49955b5-5145-4939-91e5-280569e18a33","Type":"ContainerDied","Data":"08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec"} Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.964394 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08a767625ea93aec62299911058dda75d17c5c29e2b78dca21a6a44b37d4a3ec" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.964453 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rpkx6" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.967664 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bbfcf9b2-c06f-457c-a13c-b3dd8399eb89","Type":"ContainerDied","Data":"75b912a951245e0e56c8a52eef30076143aeede0c081fb4651fe4e34d2509d66"} Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.967729 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:51 crc kubenswrapper[4842]: I0202 07:05:51.983124 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.011522 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048322 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: E0202 07:05:52.048739 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048757 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: E0202 07:05:52.048779 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048785 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: E0202 07:05:52.048799 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048805 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: E0202 07:05:52.048820 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048826 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: E0202 07:05:52.048832 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c49955b5-5145-4939-91e5-280569e18a33" containerName="neutron-db-sync" Feb 02 07:05:52 crc 
kubenswrapper[4842]: I0202 07:05:52.048840 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c49955b5-5145-4939-91e5-280569e18a33" containerName="neutron-db-sync" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.048989 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.049005 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-log" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.049016 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c49955b5-5145-4939-91e5-280569e18a33" containerName="neutron-db-sync" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.049027 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0083ea44-21b0-492b-971b-671241ff8abc" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.049036 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" containerName="glance-httpd" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.050008 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.054084 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.055531 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.055645 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-fpq5h" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.055760 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.055952 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.060299 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.066753 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.072724 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.074205 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.076238 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.077452 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.078408 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164143 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164242 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164280 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164304 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164485 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164532 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7x95\" (UniqueName: \"kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164559 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164616 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164636 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164717 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164769 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164853 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164912 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.164931 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sx9t\" (UniqueName: \"kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.165000 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.165305 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 
07:05:52.267730 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267790 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267809 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sx9t\" (UniqueName: \"kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267838 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267858 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267889 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267908 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.267980 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268011 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268062 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268080 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7x95\" (UniqueName: \"kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268130 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268158 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268172 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268203 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268248 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.268583 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.269024 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.269098 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.269816 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.270292 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.277590 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.277899 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.278797 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.280101 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.295305 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.296071 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " 
pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.296242 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.298472 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.300725 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7x95\" (UniqueName: \"kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.305358 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sx9t\" (UniqueName: \"kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.334259 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.344158 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.369679 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.387578 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.814443 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.816351 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.833472 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.881346 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.881473 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.881523 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.881586 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.882012 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wjq\" (UniqueName: \"kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.882359 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.950879 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.952557 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.958130 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.958451 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qlr5t" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.958682 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.958773 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.963862 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983618 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983686 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5wjq\" (UniqueName: \"kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983748 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983802 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983829 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.983851 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.984635 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " 
pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.984736 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.984855 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.984893 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:52 crc kubenswrapper[4842]: I0202 07:05:52.984642 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.002157 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5wjq\" (UniqueName: \"kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq\") pod \"dnsmasq-dns-6b9c8b59c-jsqpk\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.085285 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.085355 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b482\" (UniqueName: \"kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.085385 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.085632 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 
07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.085855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.139576 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.187555 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.187604 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.187647 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b482\" (UniqueName: \"kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.187675 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.187732 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.192717 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.194321 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.198456 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " 
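
Every volume above walks the same three phases in order: VerifyControllerAttachedVolume (reconciler_common.go:245), then MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637). A sketch that checks this ordering over a pre-parsed event stream — event and checkOrder are illustrative names, and the sample data is condensed from the dnsmasq-dns-6b9c8b59c-jsqpk entries:

```go
package main

import "fmt"

// Per-volume phases in the order the log shows them.
const (
	verified = iota // reconciler_common.go:245 VerifyControllerAttachedVolume started
	started         // reconciler_common.go:218 MountVolume started
	mounted         // operation_generator.go:637 MountVolume.SetUp succeeded
)

type event struct {
	volume string
	phase  int
}

// checkOrder flags any volume that skips or repeats a phase.
func checkOrder(events []event) []string {
	var bad []string
	last := map[string]int{}
	for _, e := range events {
		prev, seen := last[e.volume]
		if !seen {
			prev = -1
		}
		if e.phase != prev+1 {
			bad = append(bad, fmt.Sprintf("%s: phase %d after %d", e.volume, e.phase, prev))
		}
		last[e.volume] = e.phase
	}
	return bad
}

func main() {
	events := []event{
		{"dns-svc", verified}, {"config", verified},
		{"dns-svc", started}, {"config", started},
		{"config", mounted}, {"dns-svc", mounted},
	}
	fmt.Println("violations:", checkOrder(events)) // violations: []
}
```
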
pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.201083 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.203338 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b482\" (UniqueName: \"kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482\") pod \"neutron-7b469b995b-npwfd\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.275671 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:53 crc kubenswrapper[4842]: E0202 07:05:53.284108 4842 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Feb 02 07:05:53 crc kubenswrapper[4842]: E0202 07:05:53.284339 4842 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4nz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,Run
AsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-phj68_openstack(d9f1c72e-953b-45ba-ba69-c7574f82e8ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 02 07:05:53 crc kubenswrapper[4842]: E0202 07:05:53.285554 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-phj68" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.297486 4842 scope.go:117] "RemoveContainer" containerID="ccde2cd433c74600bcdce93601254d9511293f06a63ab6132e87513d3754c1e9" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.357303 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.400927 4842 scope.go:117] "RemoveContainer" containerID="c5982122d3335d8f8af9afed233b6885e136dd6acfc9481bba66caad8b099e8d" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.450650 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0083ea44-21b0-492b-971b-671241ff8abc" path="/var/lib/kubelet/pods/0083ea44-21b0-492b-971b-671241ff8abc/volumes" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.452590 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbfcf9b2-c06f-457c-a13c-b3dd8399eb89" path="/var/lib/kubelet/pods/bbfcf9b2-c06f-457c-a13c-b3dd8399eb89/volumes" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.469566 4842 scope.go:117] "RemoveContainer" containerID="d2517508f58a8b7c4c13459a97cc7ab9e10a897e173d407ff1912286e20ae247" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493475 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493540 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493659 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dmt2\" (UniqueName: \"kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493717 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493790 4842 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.493811 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.505166 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2" (OuterVolumeSpecName: "kube-api-access-2dmt2") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "kube-api-access-2dmt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.553344 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.567586 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.594668 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595136 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") pod \"e793f6a1-ed49-496a-af57-84d696daf728\" (UID: \"e793f6a1-ed49-496a-af57-84d696daf728\") " Feb 02 07:05:53 crc kubenswrapper[4842]: W0202 07:05:53.595308 4842 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e793f6a1-ed49-496a-af57-84d696daf728/volumes/kubernetes.io~configmap/ovsdbserver-nb Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595343 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595788 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595804 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dmt2\" (UniqueName: \"kubernetes.io/projected/e793f6a1-ed49-496a-af57-84d696daf728-kube-api-access-2dmt2\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595817 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.595826 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.596384 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.606447 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config" (OuterVolumeSpecName: "config") pod "e793f6a1-ed49-496a-af57-84d696daf728" (UID: "e793f6a1-ed49-496a-af57-84d696daf728"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.697345 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.697372 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e793f6a1-ed49-496a-af57-84d696daf728-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.710802 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xh7mg"] Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.860738 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:05:53 crc kubenswrapper[4842]: W0202 07:05:53.959868 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09febcea_8bf3_43b8_b6ff_ae8a0e445519.slice/crio-a5ef0c57463087c53e29eaaeb479b34c51cb5e6f894ab3af4029762d8f230dca WatchSource:0}: Error finding container a5ef0c57463087c53e29eaaeb479b34c51cb5e6f894ab3af4029762d8f230dca: Status 404 returned error can't find the container with id a5ef0c57463087c53e29eaaeb479b34c51cb5e6f894ab3af4029762d8f230dca Feb 02 07:05:53 crc kubenswrapper[4842]: I0202 07:05:53.960304 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.001330 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerStarted","Data":"a5ef0c57463087c53e29eaaeb479b34c51cb5e6f894ab3af4029762d8f230dca"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.004365 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-sjstk" event={"ID":"80249ec8-3d5a-4020-bed2-83b8ecd32ab9","Type":"ContainerStarted","Data":"c9da43fb971a5ef2a720b6588e511324cbe1b669ca26172de540c2c1051786f8"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.007363 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerStarted","Data":"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.020992 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-sjstk" podStartSLOduration=2.606365615 podStartE2EDuration="26.02097611s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="2026-02-02 07:05:29.83243625 +0000 UTC m=+1155.209704162" lastFinishedPulling="2026-02-02 07:05:53.247046745 +0000 UTC m=+1178.624314657" observedRunningTime="2026-02-02 07:05:54.018242283 +0000 UTC m=+1179.395510205" watchObservedRunningTime="2026-02-02 07:05:54.02097611 +0000 UTC m=+1179.398244022" Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.022803 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xh7mg" event={"ID":"226a55ec-a7c1-4c34-953c-bb4e549b0fc5","Type":"ContainerStarted","Data":"1c28118337b87470e336f30ccbda4bc135a7ba7f7cf6293ce8d7b2e21bac07df"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.032153 4842 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" event={"ID":"8c0bd1b2-3ffe-443f-b632-b44ed96afc30","Type":"ContainerStarted","Data":"cce78954b1aa2e246ca2d16f8b3a27b68612df254d83dcbe0635ca9b3466aaa0"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.035656 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ddsf" event={"ID":"fff8a308-89ab-409f-9053-6a363794df83","Type":"ContainerStarted","Data":"5828541a319e15b9a24397a64ce914d508fb08442c48731c2790845a873ff2cb"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.072122 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.073286 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56c9bc6f5c-h4x5j" event={"ID":"e793f6a1-ed49-496a-af57-84d696daf728","Type":"ContainerDied","Data":"b3ac1bf771ea13c21ef3016b99265dd8b3157a19cb4d0bcd95a7fc3cee59344d"} Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.073344 4842 scope.go:117] "RemoveContainer" containerID="b3a7c436e2e8d2b98b1b382d46734ec10fcb3fb8ee566aaba25f0dda55dc5702" Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.077649 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2ddsf" podStartSLOduration=2.781064787 podStartE2EDuration="26.077628395s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="2026-02-02 07:05:29.999822383 +0000 UTC m=+1155.377090295" lastFinishedPulling="2026-02-02 07:05:53.296385991 +0000 UTC m=+1178.673653903" observedRunningTime="2026-02-02 07:05:54.052330212 +0000 UTC m=+1179.429598124" watchObservedRunningTime="2026-02-02 07:05:54.077628395 +0000 UTC m=+1179.454896307" Feb 02 07:05:54 crc kubenswrapper[4842]: E0202 07:05:54.082190 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-phj68" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.092441 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:05:54 crc kubenswrapper[4842]: W0202 07:05:54.181819 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74fb1197_2202_4b15_a858_05dd736a1a26.slice/crio-c3a9d9eee3d9319f1e0b533f2cb62666947fc026870c7a05529e2c7e13ac265d WatchSource:0}: Error finding container c3a9d9eee3d9319f1e0b533f2cb62666947fc026870c7a05529e2c7e13ac265d: Status 404 returned error can't find the container with id c3a9d9eee3d9319f1e0b533f2cb62666947fc026870c7a05529e2c7e13ac265d Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.195298 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.209487 4842 scope.go:117] "RemoveContainer" containerID="dca3dac891364e01eb6e12794cb5bb79081189c188f045ba72387b730d26feaa" Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 07:05:54.231008 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56c9bc6f5c-h4x5j"] Feb 02 07:05:54 crc kubenswrapper[4842]: I0202 
07:05:54.252787 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.080117 4842 generic.go:334] "Generic (PLEG): container finished" podID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerID="82eafdb535c05f6b04556ae1baee492e7492a5e0fe1080d56e7f4182f6ac68b9" exitCode=0 Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.080644 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" event={"ID":"8c0bd1b2-3ffe-443f-b632-b44ed96afc30","Type":"ContainerDied","Data":"82eafdb535c05f6b04556ae1baee492e7492a5e0fe1080d56e7f4182f6ac68b9"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.090553 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xh7mg" event={"ID":"226a55ec-a7c1-4c34-953c-bb4e549b0fc5","Type":"ContainerStarted","Data":"39eb208f6af2deea706cedebd930cca14ea7a25cb9ca73a57ad9dc64e6023a18"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.113928 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerStarted","Data":"17b5094d456c9e7ac0aee7bc704529e5e3cdad0cd41064b1ee27f8f438f68541"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.113970 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerStarted","Data":"c3a9d9eee3d9319f1e0b533f2cb62666947fc026870c7a05529e2c7e13ac265d"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.126724 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerStarted","Data":"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.130254 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerStarted","Data":"f8f9e0a8b64ae08b996a6ff20de4cb61c2fe7c362caaa42c329de676a9077b38"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.130284 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerStarted","Data":"6747e535436e2bdd0c46d5273f8b5a7d29b3c3f7226e94896a48a5bfcdb6a2d9"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.130297 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.130308 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerStarted","Data":"c685a8dc8410d6a7a79b5205dd3ff23339631326844f2a5b84578d841706238e"} Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.139461 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xh7mg" podStartSLOduration=15.139441171 podStartE2EDuration="15.139441171s" podCreationTimestamp="2026-02-02 07:05:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:55.122458983 +0000 UTC m=+1180.499726895" 
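
The pod_startup_latency_tracker entries fit together arithmetically: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling); pods that pulled nothing report zero-value pull timestamps and equal durations, as keystone-bootstrap-xh7mg does here. Checking against the barbican-db-sync-sjstk figures a few entries back:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Figures copied from the barbican-db-sync-sjstk latency entry above.
	created := mustParse("2026-02-02 07:05:28 +0000 UTC")
	firstPull := mustParse("2026-02-02 07:05:29.83243625 +0000 UTC")
	lastPull := mustParse("2026-02-02 07:05:53.247046745 +0000 UTC")
	running := mustParse("2026-02-02 07:05:54.02097611 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 26.02097611s, matching the log
	fmt.Println("podStartSLOduration:", slo) // 2.606365615s, matching the log
}
```
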
watchObservedRunningTime="2026-02-02 07:05:55.139441171 +0000 UTC m=+1180.516709083" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.154597 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b469b995b-npwfd" podStartSLOduration=3.154579634 podStartE2EDuration="3.154579634s" podCreationTimestamp="2026-02-02 07:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:55.14871888 +0000 UTC m=+1180.525986792" watchObservedRunningTime="2026-02-02 07:05:55.154579634 +0000 UTC m=+1180.531847546" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.270647 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"] Feb 02 07:05:55 crc kubenswrapper[4842]: E0202 07:05:55.270989 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.271001 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" Feb 02 07:05:55 crc kubenswrapper[4842]: E0202 07:05:55.271012 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="init" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.271017 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="init" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.271177 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e793f6a1-ed49-496a-af57-84d696daf728" containerName="dnsmasq-dns" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.279614 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.282730 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.287960 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.290701 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"] Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334416 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334678 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g4w4\" (UniqueName: \"kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334761 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334785 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334855 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.334891 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436276 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436344 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436361 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g4w4\" (UniqueName: \"kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436499 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436528 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436551 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.436582 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.444586 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.444862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.445432 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " 
pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.447955 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.450197 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.450416 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.454802 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g4w4\" (UniqueName: \"kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4\") pod \"neutron-6fcc587c45-x7h24\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.457777 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e793f6a1-ed49-496a-af57-84d696daf728" path="/var/lib/kubelet/pods/e793f6a1-ed49-496a-af57-84d696daf728/volumes" Feb 02 07:05:55 crc kubenswrapper[4842]: I0202 07:05:55.602263 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:56 crc kubenswrapper[4842]: I0202 07:05:56.175838 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerStarted","Data":"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8"} Feb 02 07:05:56 crc kubenswrapper[4842]: I0202 07:05:56.209570 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.209552501 podStartE2EDuration="5.209552501s" podCreationTimestamp="2026-02-02 07:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:56.203870811 +0000 UTC m=+1181.581138723" watchObservedRunningTime="2026-02-02 07:05:56.209552501 +0000 UTC m=+1181.586820413" Feb 02 07:05:56 crc kubenswrapper[4842]: I0202 07:05:56.554543 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"] Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.191354 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerStarted","Data":"224fc5852a577215a4a41f26622ee8290bb52c1f1f725cc252747f84a03552e3"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.201085 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerStarted","Data":"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.214840 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" event={"ID":"8c0bd1b2-3ffe-443f-b632-b44ed96afc30","Type":"ContainerStarted","Data":"05833980aa0f3fcdb343d056348768c4e89e806dedb21d7281e2de92eb4da550"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.215831 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.217858 4842 generic.go:334] "Generic (PLEG): container finished" podID="fff8a308-89ab-409f-9053-6a363794df83" containerID="5828541a319e15b9a24397a64ce914d508fb08442c48731c2790845a873ff2cb" exitCode=0 Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.217916 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ddsf" event={"ID":"fff8a308-89ab-409f-9053-6a363794df83","Type":"ContainerDied","Data":"5828541a319e15b9a24397a64ce914d508fb08442c48731c2790845a873ff2cb"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.221540 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerStarted","Data":"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.221570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerStarted","Data":"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.221579 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerStarted","Data":"6baf18e2465586bae82b31b897e8d4dfb75242a3b157fb93fe3a29ff487cbf1b"} Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.221684 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.228081 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.22805984 podStartE2EDuration="5.22805984s" podCreationTimestamp="2026-02-02 07:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:57.209533334 +0000 UTC m=+1182.586801266" watchObservedRunningTime="2026-02-02 07:05:57.22805984 +0000 UTC m=+1182.605327752" Feb 02 07:05:57 crc kubenswrapper[4842]: I0202 07:05:57.244136 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" podStartSLOduration=5.244115296 podStartE2EDuration="5.244115296s" podCreationTimestamp="2026-02-02 07:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:57.230527701 +0000 UTC m=+1182.607795613" watchObservedRunningTime="2026-02-02 07:05:57.244115296 +0000 UTC m=+1182.621383208" Feb 02 07:05:58 crc kubenswrapper[4842]: I0202 07:05:58.234270 4842 generic.go:334] "Generic (PLEG): container finished" podID="226a55ec-a7c1-4c34-953c-bb4e549b0fc5" containerID="39eb208f6af2deea706cedebd930cca14ea7a25cb9ca73a57ad9dc64e6023a18" exitCode=0 Feb 02 07:05:58 crc kubenswrapper[4842]: I0202 07:05:58.234351 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xh7mg" event={"ID":"226a55ec-a7c1-4c34-953c-bb4e549b0fc5","Type":"ContainerDied","Data":"39eb208f6af2deea706cedebd930cca14ea7a25cb9ca73a57ad9dc64e6023a18"} Feb 02 07:05:58 crc kubenswrapper[4842]: I0202 07:05:58.237233 4842 generic.go:334] "Generic (PLEG): container finished" podID="80249ec8-3d5a-4020-bed2-83b8ecd32ab9" containerID="c9da43fb971a5ef2a720b6588e511324cbe1b669ca26172de540c2c1051786f8" exitCode=0 Feb 02 07:05:58 crc kubenswrapper[4842]: I0202 07:05:58.237239 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-sjstk" event={"ID":"80249ec8-3d5a-4020-bed2-83b8ecd32ab9","Type":"ContainerDied","Data":"c9da43fb971a5ef2a720b6588e511324cbe1b669ca26172de540c2c1051786f8"} Feb 02 07:05:58 crc kubenswrapper[4842]: I0202 07:05:58.252718 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fcc587c45-x7h24" podStartSLOduration=3.252700491 podStartE2EDuration="3.252700491s" podCreationTimestamp="2026-02-02 07:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:05:57.276992136 +0000 UTC m=+1182.654260118" watchObservedRunningTime="2026-02-02 07:05:58.252700491 +0000 UTC m=+1183.629968393" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.169034 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sjstk" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.174441 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.209669 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2ddsf" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.253804 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.253907 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.253935 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle\") pod \"fff8a308-89ab-409f-9053-6a363794df83\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254436 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92g4\" (UniqueName: \"kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254618 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts\") pod \"fff8a308-89ab-409f-9053-6a363794df83\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254674 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254766 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-672bm\" (UniqueName: \"kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm\") pod \"fff8a308-89ab-409f-9053-6a363794df83\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254827 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz6l8\" (UniqueName: \"kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8\") pod \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254916 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.254990 4842 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs\") pod \"fff8a308-89ab-409f-9053-6a363794df83\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.255038 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys\") pod \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\" (UID: \"226a55ec-a7c1-4c34-953c-bb4e549b0fc5\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.255069 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle\") pod \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.255108 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data\") pod \"fff8a308-89ab-409f-9053-6a363794df83\" (UID: \"fff8a308-89ab-409f-9053-6a363794df83\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.255145 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data\") pod \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\" (UID: \"80249ec8-3d5a-4020-bed2-83b8ecd32ab9\") " Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.261286 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs" (OuterVolumeSpecName: "logs") pod "fff8a308-89ab-409f-9053-6a363794df83" (UID: "fff8a308-89ab-409f-9053-6a363794df83"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.266116 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ddsf" event={"ID":"fff8a308-89ab-409f-9053-6a363794df83","Type":"ContainerDied","Data":"7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33"} Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.266175 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cc030eb3eb4272b409ce92adc2a7190b5a997425fe481081c2cb7830167dd33" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.266278 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2ddsf" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.267459 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8" (OuterVolumeSpecName: "kube-api-access-nz6l8") pod "80249ec8-3d5a-4020-bed2-83b8ecd32ab9" (UID: "80249ec8-3d5a-4020-bed2-83b8ecd32ab9"). InnerVolumeSpecName "kube-api-access-nz6l8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.268306 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xh7mg" event={"ID":"226a55ec-a7c1-4c34-953c-bb4e549b0fc5","Type":"ContainerDied","Data":"1c28118337b87470e336f30ccbda4bc135a7ba7f7cf6293ce8d7b2e21bac07df"} Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.268347 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c28118337b87470e336f30ccbda4bc135a7ba7f7cf6293ce8d7b2e21bac07df" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.268413 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xh7mg" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.272537 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-sjstk" event={"ID":"80249ec8-3d5a-4020-bed2-83b8ecd32ab9","Type":"ContainerDied","Data":"cd2d0997e2cc127c80bb06f907a598f4209b55d656a3634a4391e4cc9d674026"} Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.272581 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd2d0997e2cc127c80bb06f907a598f4209b55d656a3634a4391e4cc9d674026" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.272650 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-sjstk" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.272648 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.284392 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts" (OuterVolumeSpecName: "scripts") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.284415 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts" (OuterVolumeSpecName: "scripts") pod "fff8a308-89ab-409f-9053-6a363794df83" (UID: "fff8a308-89ab-409f-9053-6a363794df83"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.284763 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4" (OuterVolumeSpecName: "kube-api-access-h92g4") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "kube-api-access-h92g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.288499 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm" (OuterVolumeSpecName: "kube-api-access-672bm") pod "fff8a308-89ab-409f-9053-6a363794df83" (UID: "fff8a308-89ab-409f-9053-6a363794df83"). InnerVolumeSpecName "kube-api-access-672bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.288616 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.288647 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "80249ec8-3d5a-4020-bed2-83b8ecd32ab9" (UID: "80249ec8-3d5a-4020-bed2-83b8ecd32ab9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.293081 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.301410 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data" (OuterVolumeSpecName: "config-data") pod "fff8a308-89ab-409f-9053-6a363794df83" (UID: "fff8a308-89ab-409f-9053-6a363794df83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.303642 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data" (OuterVolumeSpecName: "config-data") pod "226a55ec-a7c1-4c34-953c-bb4e549b0fc5" (UID: "226a55ec-a7c1-4c34-953c-bb4e549b0fc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.328509 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fff8a308-89ab-409f-9053-6a363794df83" (UID: "fff8a308-89ab-409f-9053-6a363794df83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.332911 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80249ec8-3d5a-4020-bed2-83b8ecd32ab9" (UID: "80249ec8-3d5a-4020-bed2-83b8ecd32ab9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356914 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356937 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h92g4\" (UniqueName: \"kubernetes.io/projected/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-kube-api-access-h92g4\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356947 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356955 4842 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356964 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-672bm\" (UniqueName: \"kubernetes.io/projected/fff8a308-89ab-409f-9053-6a363794df83-kube-api-access-672bm\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356972 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz6l8\" (UniqueName: \"kubernetes.io/projected/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-kube-api-access-nz6l8\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356980 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356987 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fff8a308-89ab-409f-9053-6a363794df83-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.356995 4842 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.357002 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.357010 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fff8a308-89ab-409f-9053-6a363794df83-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.357017 4842 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/80249ec8-3d5a-4020-bed2-83b8ecd32ab9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc kubenswrapper[4842]: I0202 07:06:01.357025 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:01 crc 
kubenswrapper[4842]: I0202 07:06:01.357034 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/226a55ec-a7c1-4c34-953c-bb4e549b0fc5-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.313724 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerStarted","Data":"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921"} Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.370854 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.370898 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.389424 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.389660 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.397614 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:06:02 crc kubenswrapper[4842]: E0202 07:06:02.398019 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226a55ec-a7c1-4c34-953c-bb4e549b0fc5" containerName="keystone-bootstrap" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398039 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="226a55ec-a7c1-4c34-953c-bb4e549b0fc5" containerName="keystone-bootstrap" Feb 02 07:06:02 crc kubenswrapper[4842]: E0202 07:06:02.398060 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80249ec8-3d5a-4020-bed2-83b8ecd32ab9" containerName="barbican-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398069 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="80249ec8-3d5a-4020-bed2-83b8ecd32ab9" containerName="barbican-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: E0202 07:06:02.398106 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff8a308-89ab-409f-9053-6a363794df83" containerName="placement-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398115 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff8a308-89ab-409f-9053-6a363794df83" containerName="placement-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398351 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="226a55ec-a7c1-4c34-953c-bb4e549b0fc5" containerName="keystone-bootstrap" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398373 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff8a308-89ab-409f-9053-6a363794df83" containerName="placement-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.398409 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="80249ec8-3d5a-4020-bed2-83b8ecd32ab9" containerName="barbican-db-sync" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.399016 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.403936 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.404366 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.404580 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6drft" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.404724 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.404848 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.404958 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.425701 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.440476 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.441825 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.444795 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.450693 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.450991 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.451292 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.451430 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rf5dt" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.451600 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.457794 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.481929 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.481981 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " 
pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482015 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482074 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482096 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482117 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-457v8\" (UniqueName: \"kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482138 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.482164 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.506788 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.521490 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.528414 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.560916 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.569339 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.580721 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-drtzj" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.580964 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.581119 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593083 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593166 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593202 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-457v8\" (UniqueName: \"kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593253 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42brm\" (UniqueName: \"kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593285 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593311 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593403 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593427 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593512 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593551 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593619 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593708 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593729 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593787 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.593838 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.600148 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.601159 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.601886 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.612401 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.613120 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.624014 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.626306 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.629321 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.629390 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.630289 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.633017 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.676354 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.685093 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-457v8\" (UniqueName: \"kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8\") pod \"keystone-cd7d86b6c-rcdjq\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705436 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705505 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zc7\" (UniqueName: \"kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705526 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705542 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705568 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705593 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705635 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705693 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42brm\" (UniqueName: \"kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705717 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705746 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705771 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b767s\" (UniqueName: \"kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705798 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705818 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705832 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705849 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705873 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.705922 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.707999 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 
07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.711564 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.715747 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.715787 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.728606 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.747602 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.749565 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.754561 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42brm\" (UniqueName: \"kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm\") pod \"placement-697d496d6b-bz7zg\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.786080 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808696 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zc7\" (UniqueName: \"kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808734 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808761 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808798 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808856 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b767s\" (UniqueName: \"kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808877 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808895 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808908 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808926 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle\") pod 
\"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.808955 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.812864 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.817507 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.818408 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.820538 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.826023 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.829909 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.829973 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.830184 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="dnsmasq-dns" containerID="cri-o://05833980aa0f3fcdb343d056348768c4e89e806dedb21d7281e2de92eb4da550" gracePeriod=10 Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.834011 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.835178 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.836554 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.847751 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b767s\" (UniqueName: \"kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s\") pod \"barbican-worker-cdc46cdfc-px7hq\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.850669 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.851741 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zc7\" (UniqueName: \"kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7\") pod \"barbican-keystone-listener-69f5f7d66b-p2q6s\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.852167 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.870483 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.925665 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.927120 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.934346 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.936201 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.948958 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.971537 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:06:02 crc kubenswrapper[4842]: I0202 07:06:02.985619 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.001827 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.003428 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.010126 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.012281 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.020556 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.020696 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.020873 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lfws\" (UniqueName: \"kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.020982 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.021098 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r84hx\" (UniqueName: \"kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.021212 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.021355 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.021506 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.021634 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.022526 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.022635 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.022737 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkmc9\" (UniqueName: \"kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.022841 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.022928 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " 
pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.023012 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.023170 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.032705 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.032823 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.038988 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.123646 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124629 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124663 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124682 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkmc9\" (UniqueName: \"kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124699 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124718 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " 
pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124735 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124751 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124769 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124786 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124808 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124859 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124888 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124909 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle\") pod \"barbican-worker-57cc9f4749-jxzrq\" 
(UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124924 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124939 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lfws\" (UniqueName: \"kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124956 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124973 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.124991 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r84hx\" (UniqueName: \"kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125011 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125034 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125051 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125068 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125094 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5nxt\" (UniqueName: \"kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125115 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125140 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs4k8\" (UniqueName: \"kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125157 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.125173 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.128867 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.129460 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.129542 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.130088 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.130098 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.130607 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.130803 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.135557 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.138021 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.140181 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.142439 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.144714 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.150953 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle\") pod 
\"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.151472 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.153021 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lfws\" (UniqueName: \"kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws\") pod \"barbican-keystone-listener-77c4859bf4-qzmpm\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.161764 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkmc9\" (UniqueName: \"kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9\") pod \"barbican-worker-57cc9f4749-jxzrq\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.176972 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r84hx\" (UniqueName: \"kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx\") pod \"dnsmasq-dns-7bdf86f46f-hdddb\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.193137 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.201836 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226560 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226595 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226632 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5nxt\" (UniqueName: \"kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226669 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs4k8\" (UniqueName: \"kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226699 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226716 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226735 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226753 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226778 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226815 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226844 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.226866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.231638 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.231767 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.231923 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.231937 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.235429 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.235862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " 
pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.236019 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.237423 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.238075 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.240974 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.248236 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5nxt\" (UniqueName: \"kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt\") pod \"placement-5b5c67fdbd-zsx96\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.251666 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs4k8\" (UniqueName: \"kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8\") pod \"barbican-api-578f976b4-mj2qx\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.332510 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.336422 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd7d86b6c-rcdjq" event={"ID":"7343dd67-a085-4da9-8d79-f25ea1e20ca6","Type":"ContainerStarted","Data":"0a8707912ffa5b95a33e852a86d3ad76fb5ed5f7a33153be252e8d6c15cbbb8d"} Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.346808 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.350687 4842 generic.go:334] "Generic (PLEG): container finished" podID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerID="05833980aa0f3fcdb343d056348768c4e89e806dedb21d7281e2de92eb4da550" exitCode=0 Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.351662 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" event={"ID":"8c0bd1b2-3ffe-443f-b632-b44ed96afc30","Type":"ContainerDied","Data":"05833980aa0f3fcdb343d056348768c4e89e806dedb21d7281e2de92eb4da550"} Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.351758 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.351775 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.351786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.351925 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.367135 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.381408 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.417874 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.536996 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.537321 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5wjq\" (UniqueName: \"kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.537417 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.537448 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.537491 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.537520 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb\") pod \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\" (UID: \"8c0bd1b2-3ffe-443f-b632-b44ed96afc30\") " Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.552394 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq" (OuterVolumeSpecName: "kube-api-access-f5wjq") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "kube-api-access-f5wjq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.610270 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.643648 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5wjq\" (UniqueName: \"kubernetes.io/projected/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-kube-api-access-f5wjq\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.644260 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"] Feb 02 07:06:03 crc kubenswrapper[4842]: W0202 07:06:03.686443 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948096a2_7fcf_4cb1_90da_90f3edbfd95b.slice/crio-c3712df80cf8e090f8874f31414aef8e53734ed43676c40d1bfb1fcb4a865741 WatchSource:0}: Error finding container c3712df80cf8e090f8874f31414aef8e53734ed43676c40d1bfb1fcb4a865741: Status 404 returned error can't find the container with id c3712df80cf8e090f8874f31414aef8e53734ed43676c40d1bfb1fcb4a865741 Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.693959 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config" (OuterVolumeSpecName: "config") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.726945 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.728943 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.749731 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.750034 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.750048 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.751644 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.759715 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c0bd1b2-3ffe-443f-b632-b44ed96afc30" (UID: "8c0bd1b2-3ffe-443f-b632-b44ed96afc30"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.852199 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.852248 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c0bd1b2-3ffe-443f-b632-b44ed96afc30-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.890063 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.913317 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:03 crc kubenswrapper[4842]: I0202 07:06:03.935125 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:06:03 crc kubenswrapper[4842]: W0202 07:06:03.936263 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod595bc2a4_f0b8_4930_8c66_3b3da4cc4630.slice/crio-f735cc0c0ef98cb5751b1343a0d1aca16cf6fb764a0966b2ebc18ac2392a9b7d WatchSource:0}: Error finding container f735cc0c0ef98cb5751b1343a0d1aca16cf6fb764a0966b2ebc18ac2392a9b7d: Status 404 returned error can't find the container with id f735cc0c0ef98cb5751b1343a0d1aca16cf6fb764a0966b2ebc18ac2392a9b7d Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.105286 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.191240 4842 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.225286 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:04 crc kubenswrapper[4842]: W0202 07:06:04.250292 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac50621f_67cd_441d_99ea_6839f7f3b556.slice/crio-5c5a9a9e1c050c799b792ac4b78f2284f4eae1bc563dc03d2fe56329e1ad0873 WatchSource:0}: Error finding container 5c5a9a9e1c050c799b792ac4b78f2284f4eae1bc563dc03d2fe56329e1ad0873: Status 404 returned error can't find the container with id 5c5a9a9e1c050c799b792ac4b78f2284f4eae1bc563dc03d2fe56329e1ad0873 Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.371381 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerStarted","Data":"3841fc7dcb9ce569457a802c09c27ff59529bd2560831414d8333da874fb2c77"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.375084 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd7d86b6c-rcdjq" event={"ID":"7343dd67-a085-4da9-8d79-f25ea1e20ca6","Type":"ContainerStarted","Data":"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.376145 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.380004 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerStarted","Data":"5c5a9a9e1c050c799b792ac4b78f2284f4eae1bc563dc03d2fe56329e1ad0873"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.395837 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" event={"ID":"8c0bd1b2-3ffe-443f-b632-b44ed96afc30","Type":"ContainerDied","Data":"cce78954b1aa2e246ca2d16f8b3a27b68612df254d83dcbe0635ca9b3466aaa0"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.395886 4842 scope.go:117] "RemoveContainer" containerID="05833980aa0f3fcdb343d056348768c4e89e806dedb21d7281e2de92eb4da550" Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.396019 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b9c8b59c-jsqpk" Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.399056 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cd7d86b6c-rcdjq" podStartSLOduration=2.399038335 podStartE2EDuration="2.399038335s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:04.39560866 +0000 UTC m=+1189.772876632" watchObservedRunningTime="2026-02-02 07:06:04.399038335 +0000 UTC m=+1189.776306247" Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.402083 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerStarted","Data":"1a2fdbaaf7cba0dd3058c59daa47fefc2d3624684698fe684e8a50e2db075890"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.416238 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerStarted","Data":"33a7212242745098719539d77d7d2ab10cc0d6841f34ba8ac2dabc8a942c26b5"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.437764 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerStarted","Data":"c4839ac05fedf9ceb883263b26b3f9a42e354a5742d5701bc345aed976299c03"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.440460 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerStarted","Data":"eb1c879ce0521868ffea7d5ca4ba1e741e4b7c55bb4a6410da53f5413323bc74"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.443964 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" event={"ID":"595bc2a4-f0b8-4930-8c66-3b3da4cc4630","Type":"ContainerStarted","Data":"f735cc0c0ef98cb5751b1343a0d1aca16cf6fb764a0966b2ebc18ac2392a9b7d"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.466363 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerStarted","Data":"c3712df80cf8e090f8874f31414aef8e53734ed43676c40d1bfb1fcb4a865741"} Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.499300 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.506578 4842 scope.go:117] "RemoveContainer" containerID="82eafdb535c05f6b04556ae1baee492e7492a5e0fe1080d56e7f4182f6ac68b9" Feb 02 07:06:04 crc kubenswrapper[4842]: I0202 07:06:04.510085 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b9c8b59c-jsqpk"] Feb 02 07:06:04 crc kubenswrapper[4842]: E0202 07:06:04.748820 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod595bc2a4_f0b8_4930_8c66_3b3da4cc4630.slice/crio-b697a77798b314f9ac4ee3c53ca23704430e0f4eccb0fe586772468c61943fe2.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c0bd1b2_3ffe_443f_b632_b44ed96afc30.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.452191 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" path="/var/lib/kubelet/pods/8c0bd1b2-3ffe-443f-b632-b44ed96afc30/volumes" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.499782 4842 generic.go:334] "Generic (PLEG): container finished" podID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerID="b697a77798b314f9ac4ee3c53ca23704430e0f4eccb0fe586772468c61943fe2" exitCode=0 Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.499833 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" event={"ID":"595bc2a4-f0b8-4930-8c66-3b3da4cc4630","Type":"ContainerDied","Data":"b697a77798b314f9ac4ee3c53ca23704430e0f4eccb0fe586772468c61943fe2"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.523548 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerStarted","Data":"c1cc1b81874f37b6dd69a794f4c89e58f1e938624f539804095c18ceb3989c67"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.523589 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerStarted","Data":"6586c2e8f7af2e360086efaa4a8a6c6f2493d034bdc7ef3f3fa3fe1325d17da7"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.524391 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.524417 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.543364 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerStarted","Data":"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.543403 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerStarted","Data":"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.544089 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.544117 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.549240 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerStarted","Data":"589698e8022a3b189f2a3e9dad2ee18b515cc75e38ef79e256cca8b969f22e6f"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.549266 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" 
event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerStarted","Data":"2aaca1b2bb1165d98216c87b7292187d66c8775a2542b31141a6399a0f020777"} Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.549734 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.549759 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.552560 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.552577 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.632702 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-697d496d6b-bz7zg" podStartSLOduration=3.632685414 podStartE2EDuration="3.632685414s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:05.632103059 +0000 UTC m=+1191.009370961" watchObservedRunningTime="2026-02-02 07:06:05.632685414 +0000 UTC m=+1191.009953326" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.662403 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5b5c67fdbd-zsx96" podStartSLOduration=3.662386345 podStartE2EDuration="3.662386345s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:05.65403657 +0000 UTC m=+1191.031304492" watchObservedRunningTime="2026-02-02 07:06:05.662386345 +0000 UTC m=+1191.039654257" Feb 02 07:06:05 crc kubenswrapper[4842]: I0202 07:06:05.702675 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-578f976b4-mj2qx" podStartSLOduration=3.702656567 podStartE2EDuration="3.702656567s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:05.682494051 +0000 UTC m=+1191.059761963" watchObservedRunningTime="2026-02-02 07:06:05.702656567 +0000 UTC m=+1191.079924469" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.054881 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.056379 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.086772 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.086898 4842 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.089498 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.286606 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:06:06 crc 
kubenswrapper[4842]: E0202 07:06:06.286986 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="init" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.287003 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="init" Feb 02 07:06:06 crc kubenswrapper[4842]: E0202 07:06:06.287030 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="dnsmasq-dns" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.287036 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="dnsmasq-dns" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.287200 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c0bd1b2-3ffe-443f-b632-b44ed96afc30" containerName="dnsmasq-dns" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.288081 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.293023 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.294174 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.315017 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.422901 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zscmk\" (UniqueName: \"kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423183 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423231 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423284 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423317 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423345 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.423368 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525145 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zscmk\" (UniqueName: \"kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525190 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525240 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525297 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525327 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525358 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.525384 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.528483 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.534826 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.535564 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.536815 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.537350 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.541283 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.556864 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zscmk\" (UniqueName: \"kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk\") pod \"barbican-api-5cc5c967fd-w6ljx\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:06 crc kubenswrapper[4842]: I0202 07:06:06.619824 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:07 crc kubenswrapper[4842]: I0202 07:06:07.876315 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.610466 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerStarted","Data":"e8efd3297967419921167c81ce13173df87124973698c673eee48fbd93fc77f6"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.610817 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerStarted","Data":"a9547f640289b42444ca3a2a681d28cab4c4b05c2a274ac2247b743a8a11044d"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.624012 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerStarted","Data":"548af5f52aef73dc458ca274a43620dc086905dcd5fa415ca36e93646aa7f319"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.624055 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerStarted","Data":"70dea933b5cdfdaa531d37f7f6f82a6195fd31c430a47a6f0a2ae7fa37c9d4a1"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.640411 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" podStartSLOduration=3.036503608 podStartE2EDuration="6.640392293s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="2026-02-02 07:06:03.711445167 +0000 UTC m=+1189.088713079" lastFinishedPulling="2026-02-02 07:06:07.315333852 +0000 UTC m=+1192.692601764" observedRunningTime="2026-02-02 07:06:08.629759191 +0000 UTC m=+1194.007027103" watchObservedRunningTime="2026-02-02 07:06:08.640392293 +0000 UTC m=+1194.017660205" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.649250 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerStarted","Data":"36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.649306 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerStarted","Data":"2a1ff124f28b987212a2f7ed64a1bf208d310f3e9f13e80b4572c2dce5f8a5f9"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.661179 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-cdc46cdfc-px7hq" podStartSLOduration=3.232897435 podStartE2EDuration="6.661158344s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="2026-02-02 07:06:03.915085593 +0000 UTC m=+1189.292353515" lastFinishedPulling="2026-02-02 07:06:07.343346512 +0000 UTC m=+1192.720614424" observedRunningTime="2026-02-02 07:06:08.649341323 +0000 UTC m=+1194.026609245" watchObservedRunningTime="2026-02-02 07:06:08.661158344 +0000 UTC m=+1194.038426246" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.667411 4842 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerStarted","Data":"aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.667451 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerStarted","Data":"5a24327ba4517226f20e20f0a45585d27dd9a1490c6050d591f1638384be7d6d"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.692197 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerStarted","Data":"83c2404b835485135c772ac74f310b1761d22ef1f63c10393be3a87c53fc66aa"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.692255 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerStarted","Data":"d4afe8e323946b2a091c267fa1099076188f1ad9d2a9b63f7930456fb99f3d8f"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.692275 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerStarted","Data":"fd6b7a98a2a46a28710ac379918018f758437a367de16692a4e1403ffd79ebbd"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.693363 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.693398 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.707980 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-57cc9f4749-jxzrq" podStartSLOduration=3.607139894 podStartE2EDuration="6.707957267s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="2026-02-02 07:06:04.249749927 +0000 UTC m=+1189.627017839" lastFinishedPulling="2026-02-02 07:06:07.3505673 +0000 UTC m=+1192.727835212" observedRunningTime="2026-02-02 07:06:08.679744652 +0000 UTC m=+1194.057012584" watchObservedRunningTime="2026-02-02 07:06:08.707957267 +0000 UTC m=+1194.085225179" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.734345 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" event={"ID":"595bc2a4-f0b8-4930-8c66-3b3da4cc4630","Type":"ContainerStarted","Data":"053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117"} Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.734745 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.747549 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"] Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.747700 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" podStartSLOduration=3.378048641 podStartE2EDuration="6.747689916s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="2026-02-02 07:06:03.945658006 +0000 UTC m=+1189.322925918" 
lastFinishedPulling="2026-02-02 07:06:07.315299291 +0000 UTC m=+1192.692567193" observedRunningTime="2026-02-02 07:06:08.716898458 +0000 UTC m=+1194.094166370" watchObservedRunningTime="2026-02-02 07:06:08.747689916 +0000 UTC m=+1194.124957828" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.786842 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"] Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.799492 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cc5c967fd-w6ljx" podStartSLOduration=2.799474052 podStartE2EDuration="2.799474052s" podCreationTimestamp="2026-02-02 07:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:08.748976688 +0000 UTC m=+1194.126244600" watchObservedRunningTime="2026-02-02 07:06:08.799474052 +0000 UTC m=+1194.176741964" Feb 02 07:06:08 crc kubenswrapper[4842]: I0202 07:06:08.808827 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" podStartSLOduration=6.808815492 podStartE2EDuration="6.808815492s" podCreationTimestamp="2026-02-02 07:06:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:08.774166498 +0000 UTC m=+1194.151434410" watchObservedRunningTime="2026-02-02 07:06:08.808815492 +0000 UTC m=+1194.186083394" Feb 02 07:06:09 crc kubenswrapper[4842]: I0202 07:06:09.747382 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-phj68" event={"ID":"d9f1c72e-953b-45ba-ba69-c7574f82e8ad","Type":"ContainerStarted","Data":"d6ab707ecf1e978e711e1ac029ea3186750e3b41e200559f065ad3d1d57c4081"} Feb 02 07:06:09 crc kubenswrapper[4842]: I0202 07:06:09.768272 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-phj68" podStartSLOduration=3.395597675 podStartE2EDuration="41.768251395s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="2026-02-02 07:05:29.664886083 +0000 UTC m=+1155.042153995" lastFinishedPulling="2026-02-02 07:06:08.037539803 +0000 UTC m=+1193.414807715" observedRunningTime="2026-02-02 07:06:09.767832955 +0000 UTC m=+1195.145100867" watchObservedRunningTime="2026-02-02 07:06:09.768251395 +0000 UTC m=+1195.145519307" Feb 02 07:06:10 crc kubenswrapper[4842]: I0202 07:06:10.755830 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-cdc46cdfc-px7hq" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker" containerID="cri-o://548af5f52aef73dc458ca274a43620dc086905dcd5fa415ca36e93646aa7f319" gracePeriod=30 Feb 02 07:06:10 crc kubenswrapper[4842]: I0202 07:06:10.755885 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-cdc46cdfc-px7hq" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker-log" containerID="cri-o://70dea933b5cdfdaa531d37f7f6f82a6195fd31c430a47a6f0a2ae7fa37c9d4a1" gracePeriod=30 Feb 02 07:06:10 crc kubenswrapper[4842]: I0202 07:06:10.756191 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener" 
containerID="cri-o://e8efd3297967419921167c81ce13173df87124973698c673eee48fbd93fc77f6" gracePeriod=30 Feb 02 07:06:10 crc kubenswrapper[4842]: I0202 07:06:10.756129 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener-log" containerID="cri-o://a9547f640289b42444ca3a2a681d28cab4c4b05c2a274ac2247b743a8a11044d" gracePeriod=30 Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.773915 4842 generic.go:334] "Generic (PLEG): container finished" podID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerID="548af5f52aef73dc458ca274a43620dc086905dcd5fa415ca36e93646aa7f319" exitCode=0 Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.774895 4842 generic.go:334] "Generic (PLEG): container finished" podID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerID="70dea933b5cdfdaa531d37f7f6f82a6195fd31c430a47a6f0a2ae7fa37c9d4a1" exitCode=143 Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.774904 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerDied","Data":"548af5f52aef73dc458ca274a43620dc086905dcd5fa415ca36e93646aa7f319"} Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.775321 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerDied","Data":"70dea933b5cdfdaa531d37f7f6f82a6195fd31c430a47a6f0a2ae7fa37c9d4a1"} Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.777535 4842 generic.go:334] "Generic (PLEG): container finished" podID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerID="e8efd3297967419921167c81ce13173df87124973698c673eee48fbd93fc77f6" exitCode=0 Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.777790 4842 generic.go:334] "Generic (PLEG): container finished" podID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerID="a9547f640289b42444ca3a2a681d28cab4c4b05c2a274ac2247b743a8a11044d" exitCode=143 Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.777748 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerDied","Data":"e8efd3297967419921167c81ce13173df87124973698c673eee48fbd93fc77f6"} Feb 02 07:06:11 crc kubenswrapper[4842]: I0202 07:06:11.778076 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerDied","Data":"a9547f640289b42444ca3a2a681d28cab4c4b05c2a274ac2247b743a8a11044d"} Feb 02 07:06:12 crc kubenswrapper[4842]: I0202 07:06:12.793629 4842 generic.go:334] "Generic (PLEG): container finished" podID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" containerID="d6ab707ecf1e978e711e1ac029ea3186750e3b41e200559f065ad3d1d57c4081" exitCode=0 Feb 02 07:06:12 crc kubenswrapper[4842]: I0202 07:06:12.793730 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-phj68" event={"ID":"d9f1c72e-953b-45ba-ba69-c7574f82e8ad","Type":"ContainerDied","Data":"d6ab707ecf1e978e711e1ac029ea3186750e3b41e200559f065ad3d1d57c4081"} Feb 02 07:06:13 crc kubenswrapper[4842]: I0202 07:06:13.195427 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 
02 07:06:13 crc kubenswrapper[4842]: I0202 07:06:13.326899 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"] Feb 02 07:06:13 crc kubenswrapper[4842]: I0202 07:06:13.327103 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="dnsmasq-dns" containerID="cri-o://070ececa81450530af921167c87446de2343f6f27873a844bed7018478edcd17" gracePeriod=10 Feb 02 07:06:13 crc kubenswrapper[4842]: I0202 07:06:13.803722 4842 generic.go:334] "Generic (PLEG): container finished" podID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerID="070ececa81450530af921167c87446de2343f6f27873a844bed7018478edcd17" exitCode=0 Feb 02 07:06:13 crc kubenswrapper[4842]: I0202 07:06:13.803817 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" event={"ID":"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49","Type":"ContainerDied","Data":"070ececa81450530af921167c87446de2343f6f27873a844bed7018478edcd17"} Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.321006 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.395240 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.414364 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b767s\" (UniqueName: \"kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s\") pod \"0d385ecd-3bd8-41cf-814b-6409c426dc80\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.414550 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle\") pod \"0d385ecd-3bd8-41cf-814b-6409c426dc80\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.414616 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data\") pod \"0d385ecd-3bd8-41cf-814b-6409c426dc80\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.414658 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs\") pod \"0d385ecd-3bd8-41cf-814b-6409c426dc80\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.414705 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom\") pod \"0d385ecd-3bd8-41cf-814b-6409c426dc80\" (UID: \"0d385ecd-3bd8-41cf-814b-6409c426dc80\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.420037 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d385ecd-3bd8-41cf-814b-6409c426dc80" (UID: "0d385ecd-3bd8-41cf-814b-6409c426dc80"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.423901 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s" (OuterVolumeSpecName: "kube-api-access-b767s") pod "0d385ecd-3bd8-41cf-814b-6409c426dc80" (UID: "0d385ecd-3bd8-41cf-814b-6409c426dc80"). InnerVolumeSpecName "kube-api-access-b767s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.428765 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs" (OuterVolumeSpecName: "logs") pod "0d385ecd-3bd8-41cf-814b-6409c426dc80" (UID: "0d385ecd-3bd8-41cf-814b-6409c426dc80"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.467516 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d385ecd-3bd8-41cf-814b-6409c426dc80" (UID: "0d385ecd-3bd8-41cf-814b-6409c426dc80"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.472427 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.505541 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data" (OuterVolumeSpecName: "config-data") pod "0d385ecd-3bd8-41cf-814b-6409c426dc80" (UID: "0d385ecd-3bd8-41cf-814b-6409c426dc80"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.519181 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.519264 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.519281 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d385ecd-3bd8-41cf-814b-6409c426dc80-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.519293 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d385ecd-3bd8-41cf-814b-6409c426dc80-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.519305 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b767s\" (UniqueName: \"kubernetes.io/projected/0d385ecd-3bd8-41cf-814b-6409c426dc80-kube-api-access-b767s\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.520536 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-phj68" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.590879 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623481 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data\") pod \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623572 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623610 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623627 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623650 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle\") pod \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " Feb 02 07:06:14 crc kubenswrapper[4842]: 
I0202 07:06:14.623676 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623745 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom\") pod \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623763 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623810 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs\") pod \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623833 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6zc7\" (UniqueName: \"kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7\") pod \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\" (UID: \"948096a2-7fcf-4cb1-90da-90f3edbfd95b\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.623879 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4nz2\" (UniqueName: \"kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2\") pod \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\" (UID: \"d9f1c72e-953b-45ba-ba69-c7574f82e8ad\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.627891 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.628325 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2" (OuterVolumeSpecName: "kube-api-access-v4nz2") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "kube-api-access-v4nz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.629050 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs" (OuterVolumeSpecName: "logs") pod "948096a2-7fcf-4cb1-90da-90f3edbfd95b" (UID: "948096a2-7fcf-4cb1-90da-90f3edbfd95b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.629607 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.631548 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts" (OuterVolumeSpecName: "scripts") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.634109 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7" (OuterVolumeSpecName: "kube-api-access-l6zc7") pod "948096a2-7fcf-4cb1-90da-90f3edbfd95b" (UID: "948096a2-7fcf-4cb1-90da-90f3edbfd95b"). InnerVolumeSpecName "kube-api-access-l6zc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.636124 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "948096a2-7fcf-4cb1-90da-90f3edbfd95b" (UID: "948096a2-7fcf-4cb1-90da-90f3edbfd95b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.658004 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "948096a2-7fcf-4cb1-90da-90f3edbfd95b" (UID: "948096a2-7fcf-4cb1-90da-90f3edbfd95b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.690358 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.696859 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data" (OuterVolumeSpecName: "config-data") pod "d9f1c72e-953b-45ba-ba69-c7574f82e8ad" (UID: "d9f1c72e-953b-45ba-ba69-c7574f82e8ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.698881 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data" (OuterVolumeSpecName: "config-data") pod "948096a2-7fcf-4cb1-90da-90f3edbfd95b" (UID: "948096a2-7fcf-4cb1-90da-90f3edbfd95b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726298 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trrjw\" (UniqueName: \"kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726401 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726463 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726491 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726520 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726597 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config\") pod \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\" (UID: \"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49\") " Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726940 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726955 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726966 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726975 4842 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.726984 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 
07:06:14.726993 4842 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.727000 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.727008 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/948096a2-7fcf-4cb1-90da-90f3edbfd95b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.727016 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948096a2-7fcf-4cb1-90da-90f3edbfd95b-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.727024 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6zc7\" (UniqueName: \"kubernetes.io/projected/948096a2-7fcf-4cb1-90da-90f3edbfd95b-kube-api-access-l6zc7\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.727031 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4nz2\" (UniqueName: \"kubernetes.io/projected/d9f1c72e-953b-45ba-ba69-c7574f82e8ad-kube-api-access-v4nz2\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.729309 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw" (OuterVolumeSpecName: "kube-api-access-trrjw") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "kube-api-access-trrjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.762648 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.766814 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.773262 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.779854 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.789595 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config" (OuterVolumeSpecName: "config") pod "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" (UID: "cc29f5ed-e410-4d0a-ae66-ab78c89c6a49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.815710 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-cdc46cdfc-px7hq" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.815704 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-cdc46cdfc-px7hq" event={"ID":"0d385ecd-3bd8-41cf-814b-6409c426dc80","Type":"ContainerDied","Data":"c4839ac05fedf9ceb883263b26b3f9a42e354a5742d5701bc345aed976299c03"} Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.816002 4842 scope.go:117] "RemoveContainer" containerID="548af5f52aef73dc458ca274a43620dc086905dcd5fa415ca36e93646aa7f319" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.822822 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerStarted","Data":"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0"} Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.822926 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-central-agent" containerID="cri-o://2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132" gracePeriod=30 Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.822965 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.823046 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="proxy-httpd" containerID="cri-o://fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0" gracePeriod=30 Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.823087 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="sg-core" containerID="cri-o://46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921" gracePeriod=30 Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.823117 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-notification-agent" containerID="cri-o://489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf" gracePeriod=30 Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.830966 4842 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trrjw\" (UniqueName: \"kubernetes.io/projected/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-kube-api-access-trrjw\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831013 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831033 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831053 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831070 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831086 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831800 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.831799 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-69f5f7d66b-p2q6s" event={"ID":"948096a2-7fcf-4cb1-90da-90f3edbfd95b","Type":"ContainerDied","Data":"c3712df80cf8e090f8874f31414aef8e53734ed43676c40d1bfb1fcb4a865741"} Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.834259 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-phj68" event={"ID":"d9f1c72e-953b-45ba-ba69-c7574f82e8ad","Type":"ContainerDied","Data":"e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704"} Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.834287 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0942641dc8319ec78eeb7f961a7a30b1fb70ac7a621c74e1e520f1227c8c704" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.835749 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-phj68" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.840493 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.844110 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" event={"ID":"cc29f5ed-e410-4d0a-ae66-ab78c89c6a49","Type":"ContainerDied","Data":"3bf1c02d1eb4a6fd6bfb8e0d7089ca1be72bb9eccd12b09bde66e78b797862a2"} Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.844159 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dc4fcdbc-b8t4s" Feb 02 07:06:14 crc kubenswrapper[4842]: I0202 07:06:14.852341 4842 scope.go:117] "RemoveContainer" containerID="70dea933b5cdfdaa531d37f7f6f82a6195fd31c430a47a6f0a2ae7fa37c9d4a1" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:14.899738 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.396628368 podStartE2EDuration="46.89972034s" podCreationTimestamp="2026-02-02 07:05:28 +0000 UTC" firstStartedPulling="2026-02-02 07:05:29.8194449 +0000 UTC m=+1155.196712812" lastFinishedPulling="2026-02-02 07:06:14.322536852 +0000 UTC m=+1199.699804784" observedRunningTime="2026-02-02 07:06:14.853388199 +0000 UTC m=+1200.230656111" watchObservedRunningTime="2026-02-02 07:06:14.89972034 +0000 UTC m=+1200.276988252" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:14.926599 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:14.946751 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-cdc46cdfc-px7hq"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:14.972001 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.097566 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"] Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098015 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098032 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener" Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098054 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener-log" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098060 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener-log" Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098069 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="dnsmasq-dns" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098074 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="dnsmasq-dns" Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098086 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098092 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker" Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098104 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker-log" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098110 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" 
containerName="barbican-worker-log" Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098117 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="init" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098122 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="init" Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.098131 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" containerName="cinder-db-sync" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098137 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" containerName="cinder-db-sync" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098309 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" containerName="dnsmasq-dns" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098323 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener-log" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098340 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098353 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" containerName="barbican-keystone-listener" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098362 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" containerName="cinder-db-sync" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.098368 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" containerName="barbican-worker-log" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.099197 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.108382 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.110087 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.113508 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.113745 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.113868 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.114004 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fr64b" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.120542 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.144375 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154352 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154390 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154424 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154460 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154481 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154554 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbgnn\" (UniqueName: \"kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154595 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw2ng\" (UniqueName: \"kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154611 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154631 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154648 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154675 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.154803 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.235287 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.250045 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.252448 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255538 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255573 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255604 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255623 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255647 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255669 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255683 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255700 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255730 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " 
pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255750 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbgnn\" (UniqueName: \"kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255773 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thgcz\" (UniqueName: \"kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255792 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw2ng\" (UniqueName: \"kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255807 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255833 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255850 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255871 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255903 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255921 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.255941 4842 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.256544 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.260021 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.260873 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.261030 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.261758 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.262890 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.264801 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.267971 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.272877 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.274874 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.289176 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.301339 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw2ng\" (UniqueName: \"kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng\") pod \"cinder-scheduler-0\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.304977 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbgnn\" (UniqueName: \"kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn\") pod \"dnsmasq-dns-75bfc9b94f-zwbb4\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356643 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356710 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356747 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356778 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thgcz\" (UniqueName: \"kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356825 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356846 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.356864 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.357157 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.357356 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.359043 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.361186 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.362523 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.363748 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.377791 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.378073 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thgcz\" (UniqueName: \"kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz\") pod \"cinder-api-0\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.448699 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d385ecd-3bd8-41cf-814b-6409c426dc80" path="/var/lib/kubelet/pods/0d385ecd-3bd8-41cf-814b-6409c426dc80/volumes" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.503921 4842 scope.go:117] "RemoveContainer" containerID="e8efd3297967419921167c81ce13173df87124973698c673eee48fbd93fc77f6" Feb 02 07:06:15 crc kubenswrapper[4842]: E0202 07:06:15.516197 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d385ecd_3bd8_41cf_814b_6409c426dc80.slice/crio-c4839ac05fedf9ceb883263b26b3f9a42e354a5742d5701bc345aed976299c03\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d385ecd_3bd8_41cf_814b_6409c426dc80.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7aab5ec_829b_42dd_89db_74e28ab9346e.slice/crio-fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7aab5ec_829b_42dd_89db_74e28ab9346e.slice/crio-conmon-fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7aab5ec_829b_42dd_89db_74e28ab9346e.slice/crio-conmon-2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948096a2_7fcf_4cb1_90da_90f3edbfd95b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7aab5ec_829b_42dd_89db_74e28ab9346e.slice/crio-2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.548377 4842 scope.go:117] "RemoveContainer" containerID="a9547f640289b42444ca3a2a681d28cab4c4b05c2a274ac2247b743a8a11044d" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.549698 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.550487 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fr64b" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.560675 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.562310 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.562476 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.590545 4842 scope.go:117] "RemoveContainer" containerID="070ececa81450530af921167c87446de2343f6f27873a844bed7018478edcd17" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.598895 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-69f5f7d66b-p2q6s"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.608342 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.622271 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dc4fcdbc-b8t4s"] Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.660694 4842 scope.go:117] "RemoveContainer" containerID="b65de85796493b7fd1d1b4d84ddbf8a0d1cb6cbceca0fba243ff835d64eb5002" Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858633 4842 generic.go:334] "Generic (PLEG): container finished" podID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerID="fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0" exitCode=0 Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858839 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerDied","Data":"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0"} Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858891 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerDied","Data":"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921"} Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858859 4842 generic.go:334] "Generic (PLEG): container finished" podID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerID="46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921" exitCode=2 Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858915 4842 generic.go:334] "Generic (PLEG): container finished" podID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerID="2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132" exitCode=0 Feb 02 07:06:15 crc kubenswrapper[4842]: I0202 07:06:15.858960 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerDied","Data":"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132"} Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.092345 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"] Feb 02 07:06:16 crc kubenswrapper[4842]: W0202 07:06:16.096575 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e3c4cab_c86f_4819_8d09_ac45ccb6ea16.slice/crio-1e6b63a560dc8cb262f32d7a92ff245402cd7c329b5c9d29fa17e9ebc50d169c WatchSource:0}: Error finding container 1e6b63a560dc8cb262f32d7a92ff245402cd7c329b5c9d29fa17e9ebc50d169c: Status 404 returned error can't find the container with id 1e6b63a560dc8cb262f32d7a92ff245402cd7c329b5c9d29fa17e9ebc50d169c Feb 02 07:06:16 crc kubenswrapper[4842]: W0202 07:06:16.143237 4842 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccb5d691_9421_4007_8184_b3885f746622.slice/crio-0a559b7323dca0655523697c26ea9fa913f9065dad8b4f84d8e4b5e4851d5eac WatchSource:0}: Error finding container 0a559b7323dca0655523697c26ea9fa913f9065dad8b4f84d8e4b5e4851d5eac: Status 404 returned error can't find the container with id 0a559b7323dca0655523697c26ea9fa913f9065dad8b4f84d8e4b5e4851d5eac Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.156998 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.207127 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:16 crc kubenswrapper[4842]: W0202 07:06:16.214351 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd737380b_08d3_455f_a9a7_080d76cabc9f.slice/crio-448240e5421a87237dad04890b2a4f40bc671d8ec2cf606c184317a141cf69db WatchSource:0}: Error finding container 448240e5421a87237dad04890b2a4f40bc671d8ec2cf606c184317a141cf69db: Status 404 returned error can't find the container with id 448240e5421a87237dad04890b2a4f40bc671d8ec2cf606c184317a141cf69db Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.883926 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerStarted","Data":"448240e5421a87237dad04890b2a4f40bc671d8ec2cf606c184317a141cf69db"} Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.893606 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerStarted","Data":"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547"} Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.893650 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerStarted","Data":"0a559b7323dca0655523697c26ea9fa913f9065dad8b4f84d8e4b5e4851d5eac"} Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.898557 4842 generic.go:334] "Generic (PLEG): container finished" podID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerID="69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775" exitCode=0 Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.898622 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" event={"ID":"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16","Type":"ContainerDied","Data":"69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775"} Feb 02 07:06:16 crc kubenswrapper[4842]: I0202 07:06:16.898652 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" event={"ID":"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16","Type":"ContainerStarted","Data":"1e6b63a560dc8cb262f32d7a92ff245402cd7c329b5c9d29fa17e9ebc50d169c"} Feb 02 07:06:17 crc kubenswrapper[4842]: I0202 07:06:17.194569 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:17 crc kubenswrapper[4842]: I0202 07:06:17.455004 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948096a2-7fcf-4cb1-90da-90f3edbfd95b" path="/var/lib/kubelet/pods/948096a2-7fcf-4cb1-90da-90f3edbfd95b/volumes" Feb 02 07:06:17 crc kubenswrapper[4842]: I0202 07:06:17.455891 4842 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="cc29f5ed-e410-4d0a-ae66-ab78c89c6a49" path="/var/lib/kubelet/pods/cc29f5ed-e410-4d0a-ae66-ab78c89c6a49/volumes" Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.160901 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.410946 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.469688 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.469899 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-578f976b4-mj2qx" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api-log" containerID="cri-o://2aaca1b2bb1165d98216c87b7292187d66c8775a2542b31141a6399a0f020777" gracePeriod=30 Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.470288 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-578f976b4-mj2qx" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api" containerID="cri-o://589698e8022a3b189f2a3e9dad2ee18b515cc75e38ef79e256cca8b969f22e6f" gracePeriod=30 Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.930928 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerStarted","Data":"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681"} Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.931282 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerStarted","Data":"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b"} Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.934637 4842 generic.go:334] "Generic (PLEG): container finished" podID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerID="2aaca1b2bb1165d98216c87b7292187d66c8775a2542b31141a6399a0f020777" exitCode=143 Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.934719 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerDied","Data":"2aaca1b2bb1165d98216c87b7292187d66c8775a2542b31141a6399a0f020777"} Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.936616 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" event={"ID":"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16","Type":"ContainerStarted","Data":"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"} Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.936798 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.938613 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerStarted","Data":"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad"} Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.938697 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api-log" containerID="cri-o://0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" gracePeriod=30 Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.938798 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.938844 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api" containerID="cri-o://63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" gracePeriod=30 Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.949073 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.312338565 podStartE2EDuration="3.949060909s" podCreationTimestamp="2026-02-02 07:06:15 +0000 UTC" firstStartedPulling="2026-02-02 07:06:16.217810629 +0000 UTC m=+1201.595078541" lastFinishedPulling="2026-02-02 07:06:16.854532973 +0000 UTC m=+1202.231800885" observedRunningTime="2026-02-02 07:06:18.9466823 +0000 UTC m=+1204.323950212" watchObservedRunningTime="2026-02-02 07:06:18.949060909 +0000 UTC m=+1204.326328811" Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.972048 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" podStartSLOduration=3.972034945 podStartE2EDuration="3.972034945s" podCreationTimestamp="2026-02-02 07:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:18.970457716 +0000 UTC m=+1204.347725628" watchObservedRunningTime="2026-02-02 07:06:18.972034945 +0000 UTC m=+1204.349302857" Feb 02 07:06:18 crc kubenswrapper[4842]: I0202 07:06:18.991399 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.991384251 podStartE2EDuration="3.991384251s" podCreationTimestamp="2026-02-02 07:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:18.987878265 +0000 UTC m=+1204.365146207" watchObservedRunningTime="2026-02-02 07:06:18.991384251 +0000 UTC m=+1204.368652163" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.607111 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.675856 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.675971 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.676012 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.676046 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.676196 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thgcz\" (UniqueName: \"kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.676301 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.676340 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle\") pod \"ccb5d691-9421-4007-8184-b3885f746622\" (UID: \"ccb5d691-9421-4007-8184-b3885f746622\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.677493 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs" (OuterVolumeSpecName: "logs") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.677544 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "etc-machine-id". 
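
Each UnmountVolume/TearDown record above names the volume as <plugin>/<podUID>-<volumeName>, e.g. kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs. A sketch that splits such an identifier; the fixed 36-character UID layout is an assumption read off the records themselves:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitVolumeID breaks "<plugin>/<podUID>-<volume>" into its parts. Pod
    // UIDs are 36 characters, which is how this sketch finds the boundary.
    func splitVolumeID(id string) (plugin, podUID, volume string, ok bool) {
        i := strings.LastIndex(id, "/")
        if i < 0 || len(id) < i+1+36+1 {
            return "", "", "", false
        }
        rest := id[i+1:]
        return id[:i], rest[:36], rest[37:], true
    }

    func main() {
        id := "kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs"
        p, uid, vol, ok := splitVolumeID(id)
        fmt.Println(p, uid, vol, ok)
        // kubernetes.io/empty-dir ccb5d691-9421-4007-8184-b3885f746622 logs true
    }
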
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.690777 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts" (OuterVolumeSpecName: "scripts") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.690818 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.696805 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz" (OuterVolumeSpecName: "kube-api-access-thgcz") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "kube-api-access-thgcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.746363 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.787818 4842 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ccb5d691-9421-4007-8184-b3885f746622-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.788002 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.788084 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ccb5d691-9421-4007-8184-b3885f746622-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.788157 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thgcz\" (UniqueName: \"kubernetes.io/projected/ccb5d691-9421-4007-8184-b3885f746622-kube-api-access-thgcz\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.788320 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.788407 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.826354 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data" (OuterVolumeSpecName: "config-data") pod "ccb5d691-9421-4007-8184-b3885f746622" (UID: "ccb5d691-9421-4007-8184-b3885f746622"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.884861 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.890337 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb5d691-9421-4007-8184-b3885f746622-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.948100 4842 generic.go:334] "Generic (PLEG): container finished" podID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerID="489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf" exitCode=0 Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.948152 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerDied","Data":"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf"} Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.948177 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e7aab5ec-829b-42dd-89db-74e28ab9346e","Type":"ContainerDied","Data":"7ea6f3db6a36a7dee937382b0699d18f0905deeb5700b93c12a3f06c02d6628f"} Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.948192 4842 scope.go:117] "RemoveContainer" containerID="fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.948316 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.952257 4842 generic.go:334] "Generic (PLEG): container finished" podID="ccb5d691-9421-4007-8184-b3885f746622" containerID="63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" exitCode=0 Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.952274 4842 generic.go:334] "Generic (PLEG): container finished" podID="ccb5d691-9421-4007-8184-b3885f746622" containerID="0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" exitCode=143 Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.952954 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.955792 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerDied","Data":"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad"} Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.955860 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerDied","Data":"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547"} Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.955876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ccb5d691-9421-4007-8184-b3885f746622","Type":"ContainerDied","Data":"0a559b7323dca0655523697c26ea9fa913f9065dad8b4f84d8e4b5e4851d5eac"} Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.975636 4842 scope.go:117] "RemoveContainer" containerID="46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.991139 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.991591 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.991751 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.992197 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.992324 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.992791 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "log-httpd". 
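
The SyncLoop (PLEG) records above report one ContainerDied per container and then one more for an ID that was never a named container; for cinder-api-0 that final ID (0a559b73...) matches the first ContainerStarted Data earlier in the log, which appears to be the pod sandbox itself. A sketch tallying those events per pod; the replayed IDs are truncated for brevity and the event shape is an assumption, not the kubelet's type:

    package main

    import "fmt"

    // event mirrors the PLEG records above: pod, event type, container ID.
    type event struct{ pod, typ, id string }

    func main() {
        // Hypothetical replay of the ceilometer-0 / cinder-api-0 events above.
        events := []event{
            {"openstack/ceilometer-0", "ContainerDied", "489c01ede4a0"},
            {"openstack/ceilometer-0", "ContainerDied", "7ea6f3db6a36"},
            {"openstack/cinder-api-0", "ContainerDied", "63e1e84eff67"},
            {"openstack/cinder-api-0", "ContainerDied", "0f51b61eb0b0"},
            {"openstack/cinder-api-0", "ContainerDied", "0a559b7323dc"}, // sandbox
        }
        died := map[string]int{}
        for _, e := range events {
            if e.typ == "ContainerDied" {
                died[e.pod]++
            }
        }
        for pod, n := range died {
            fmt.Printf("%s: %d ContainerDied events\n", pod, n)
        }
    }
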
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.992889 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2576\" (UniqueName: \"kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.993028 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.993094 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts\") pod \"e7aab5ec-829b-42dd-89db-74e28ab9346e\" (UID: \"e7aab5ec-829b-42dd-89db-74e28ab9346e\") " Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.993657 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.993695 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e7aab5ec-829b-42dd-89db-74e28ab9346e-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:19 crc kubenswrapper[4842]: I0202 07:06:19.999280 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576" (OuterVolumeSpecName: "kube-api-access-h2576") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "kube-api-access-h2576". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.002454 4842 scope.go:117] "RemoveContainer" containerID="489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.002710 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts" (OuterVolumeSpecName: "scripts") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.006688 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.015384 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.035596 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.035986 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-central-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.035999 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-central-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.036014 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036020 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.036034 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="sg-core" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036041 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="sg-core" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.036049 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="proxy-httpd" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036056 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="proxy-httpd" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.036067 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-notification-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036072 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-notification-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.036094 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api-log" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036100 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api-log" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036260 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036275 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-central-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036286 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb5d691-9421-4007-8184-b3885f746622" containerName="cinder-api-log" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 
07:06:20.036295 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="sg-core" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036306 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="proxy-httpd" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.036317 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" containerName="ceilometer-notification-agent" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.037245 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.039845 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.039883 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.044925 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.040618 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.054112 4842 scope.go:117] "RemoveContainer" containerID="2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.054557 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "sg-core-conf-yaml". 
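
The paired cpu_manager/memory_manager RemoveStaleState records above (an error line followed by a "Deleted CPUSet assignment" info line) show the resource managers dropping per-container state for pods that no longer exist before the replacement pod is admitted. A sketch of that purge pattern, assuming a simple map keyed by (podUID, containerName) and hypothetical assignment values:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    func main() {
        // Hypothetical assignment state left behind by the deleted pods above.
        state := map[key]string{
            {"e7aab5ec-829b-42dd-89db-74e28ab9346e", "sg-core"}:    "2-3",
            {"ccb5d691-9421-4007-8184-b3885f746622", "cinder-api"}: "4-5",
        }
        active := map[string]bool{} // neither pod UID is live any more

        // RemoveStaleState: purge entries whose pod is gone, as the records do.
        for k := range state {
            if !active[k.podUID] {
                fmt.Printf("removing stale state podUID=%q containerName=%q\n", k.podUID, k.container)
                delete(state, k)
            }
        }
        fmt.Println("entries left:", len(state))
    }
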
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.088510 4842 scope.go:117] "RemoveContainer" containerID="fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.089079 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0\": container with ID starting with fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0 not found: ID does not exist" containerID="fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089119 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0"} err="failed to get container status \"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0\": rpc error: code = NotFound desc = could not find container \"fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0\": container with ID starting with fe3375b909f92cb4bbe73dec1a8b9dd6bf271192a5cfdeabb2b30b199ea28fc0 not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089148 4842 scope.go:117] "RemoveContainer" containerID="46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.089515 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921\": container with ID starting with 46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921 not found: ID does not exist" containerID="46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089552 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921"} err="failed to get container status \"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921\": rpc error: code = NotFound desc = could not find container \"46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921\": container with ID starting with 46a4ec7b1a2bf914002a2bbd86c470d96a9acddcc7f5c8732c24027d3a07b921 not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089581 4842 scope.go:117] "RemoveContainer" containerID="489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.089841 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf\": container with ID starting with 489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf not found: ID does not exist" containerID="489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089877 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf"} err="failed to get container status \"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf\": rpc error: code = NotFound desc = could not 
find container \"489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf\": container with ID starting with 489c01ede4a0ab782872bdaed559698536c0754fc4c6b18af574f3dd700850cf not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.089898 4842 scope.go:117] "RemoveContainer" containerID="2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.090195 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132\": container with ID starting with 2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132 not found: ID does not exist" containerID="2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.090226 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132"} err="failed to get container status \"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132\": rpc error: code = NotFound desc = could not find container \"2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132\": container with ID starting with 2f1f71359696d01a5862009ba293a284a700d2d113c3d648dd2fd55ef0a71132 not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.090241 4842 scope.go:117] "RemoveContainer" containerID="63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.095095 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2576\" (UniqueName: \"kubernetes.io/projected/e7aab5ec-829b-42dd-89db-74e28ab9346e-kube-api-access-h2576\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.095115 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.095124 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.101363 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.110024 4842 scope.go:117] "RemoveContainer" containerID="0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.124301 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data" (OuterVolumeSpecName: "config-data") pod "e7aab5ec-829b-42dd-89db-74e28ab9346e" (UID: "e7aab5ec-829b-42dd-89db-74e28ab9346e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.145413 4842 scope.go:117] "RemoveContainer" containerID="63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.146071 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad\": container with ID starting with 63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad not found: ID does not exist" containerID="63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146238 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad"} err="failed to get container status \"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad\": rpc error: code = NotFound desc = could not find container \"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad\": container with ID starting with 63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146265 4842 scope.go:117] "RemoveContainer" containerID="0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" Feb 02 07:06:20 crc kubenswrapper[4842]: E0202 07:06:20.146553 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547\": container with ID starting with 0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547 not found: ID does not exist" containerID="0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146574 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547"} err="failed to get container status \"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547\": rpc error: code = NotFound desc = could not find container \"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547\": container with ID starting with 0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547 not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146595 4842 scope.go:117] "RemoveContainer" containerID="63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146794 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad"} err="failed to get container status \"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad\": rpc error: code = NotFound desc = could not find container \"63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad\": container with ID starting with 63e1e84eff6725e7d759565c31c07d276febdcc5bf224849869455d2415276ad not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.146814 4842 scope.go:117] "RemoveContainer" containerID="0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.147019 4842 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547"} err="failed to get container status \"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547\": rpc error: code = NotFound desc = could not find container \"0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547\": container with ID starting with 0f51b61eb0b0342769616aab9617a4eca111b893c1851374a624e3c13f613547 not found: ID does not exist" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196570 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196612 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196726 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196757 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmp4\" (UniqueName: \"kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196890 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.196958 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.197033 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.197080 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: 
I0202 07:06:20.197259 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.197385 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.197406 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7aab5ec-829b-42dd-89db-74e28ab9346e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.295917 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.299747 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.299889 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.300634 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.300539 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.301371 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmp4\" (UniqueName: \"kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.301595 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.301677 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.301808 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.301931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.302085 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.303370 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.305916 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.306057 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.307547 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.307964 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.308012 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.308990 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.326320 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4fmp4\" (UniqueName: \"kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4\") pod \"cinder-api-0\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.342255 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.368457 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.375596 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.380352 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.383852 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.384153 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.394550 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506543 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506584 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506754 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506778 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506828 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.506917 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.507043 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf6fm\" (UniqueName: \"kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.563077 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.608733 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf6fm\" (UniqueName: \"kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.608842 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.608899 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.608960 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.608985 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.609010 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.609060 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.609467 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.609508 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.613023 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.615278 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.615550 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.615899 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.626862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf6fm\" (UniqueName: \"kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm\") pod \"ceilometer-0\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.717158 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.834063 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:06:20 crc kubenswrapper[4842]: W0202 07:06:20.838973 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod900b2d20_01c8_47e0_8271_ccfd8549d468.slice/crio-f8428d2a8e93132509de41794f4b8946214003b09ad9c320fa782cef8d54fe76 WatchSource:0}: Error finding container f8428d2a8e93132509de41794f4b8946214003b09ad9c320fa782cef8d54fe76: Status 404 returned error can't find the container with id f8428d2a8e93132509de41794f4b8946214003b09ad9c320fa782cef8d54fe76 Feb 02 07:06:20 crc kubenswrapper[4842]: I0202 07:06:20.970278 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerStarted","Data":"f8428d2a8e93132509de41794f4b8946214003b09ad9c320fa782cef8d54fe76"} Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.055577 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:21 crc kubenswrapper[4842]: W0202 07:06:21.060176 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0636bdd6_0d17_4f9b_9031_663dfb98f672.slice/crio-2332347c0d70878870bc3cca3315995176808c8257ccc12723509cbb8433193f WatchSource:0}: Error finding container 2332347c0d70878870bc3cca3315995176808c8257ccc12723509cbb8433193f: Status 404 returned error can't find the container with id 2332347c0d70878870bc3cca3315995176808c8257ccc12723509cbb8433193f Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.454916 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb5d691-9421-4007-8184-b3885f746622" path="/var/lib/kubelet/pods/ccb5d691-9421-4007-8184-b3885f746622/volumes" Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.461739 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7aab5ec-829b-42dd-89db-74e28ab9346e" path="/var/lib/kubelet/pods/e7aab5ec-829b-42dd-89db-74e28ab9346e/volumes" Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.979621 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerStarted","Data":"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070"} Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.981690 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerStarted","Data":"0275ebaf83cd1dc6f0f1e530a2520ae303911995fcb24e0ce6bb618355448ca7"} Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.981731 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerStarted","Data":"2332347c0d70878870bc3cca3315995176808c8257ccc12723509cbb8433193f"} Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.983973 4842 generic.go:334] "Generic (PLEG): container finished" podID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerID="589698e8022a3b189f2a3e9dad2ee18b515cc75e38ef79e256cca8b969f22e6f" exitCode=0 Feb 02 07:06:21 crc kubenswrapper[4842]: I0202 07:06:21.984018 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" 
event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerDied","Data":"589698e8022a3b189f2a3e9dad2ee18b515cc75e38ef79e256cca8b969f22e6f"} Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.249364 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.348564 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs\") pod \"ac50621f-67cd-441d-99ea-6839f7f3b556\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.348925 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs" (OuterVolumeSpecName: "logs") pod "ac50621f-67cd-441d-99ea-6839f7f3b556" (UID: "ac50621f-67cd-441d-99ea-6839f7f3b556"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.348935 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom\") pod \"ac50621f-67cd-441d-99ea-6839f7f3b556\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.349059 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs4k8\" (UniqueName: \"kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8\") pod \"ac50621f-67cd-441d-99ea-6839f7f3b556\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.349103 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data\") pod \"ac50621f-67cd-441d-99ea-6839f7f3b556\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.349151 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle\") pod \"ac50621f-67cd-441d-99ea-6839f7f3b556\" (UID: \"ac50621f-67cd-441d-99ea-6839f7f3b556\") " Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.349932 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac50621f-67cd-441d-99ea-6839f7f3b556-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.355608 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8" (OuterVolumeSpecName: "kube-api-access-xs4k8") pod "ac50621f-67cd-441d-99ea-6839f7f3b556" (UID: "ac50621f-67cd-441d-99ea-6839f7f3b556"). InnerVolumeSpecName "kube-api-access-xs4k8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.363771 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ac50621f-67cd-441d-99ea-6839f7f3b556" (UID: "ac50621f-67cd-441d-99ea-6839f7f3b556"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.399293 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac50621f-67cd-441d-99ea-6839f7f3b556" (UID: "ac50621f-67cd-441d-99ea-6839f7f3b556"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.425332 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data" (OuterVolumeSpecName: "config-data") pod "ac50621f-67cd-441d-99ea-6839f7f3b556" (UID: "ac50621f-67cd-441d-99ea-6839f7f3b556"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.451759 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.451981 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.452091 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ac50621f-67cd-441d-99ea-6839f7f3b556-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:22 crc kubenswrapper[4842]: I0202 07:06:22.452176 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs4k8\" (UniqueName: \"kubernetes.io/projected/ac50621f-67cd-441d-99ea-6839f7f3b556-kube-api-access-xs4k8\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.001888 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerStarted","Data":"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab"} Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.002687 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.005274 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerStarted","Data":"80e2b283fa7d6732f1ee502cb45ba016aee0bc6094fa574b3e9b062a5cb23a5c"} Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.006918 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-578f976b4-mj2qx" event={"ID":"ac50621f-67cd-441d-99ea-6839f7f3b556","Type":"ContainerDied","Data":"5c5a9a9e1c050c799b792ac4b78f2284f4eae1bc563dc03d2fe56329e1ad0873"} Feb 02 07:06:23 
crc kubenswrapper[4842]: I0202 07:06:23.006949 4842 scope.go:117] "RemoveContainer" containerID="589698e8022a3b189f2a3e9dad2ee18b515cc75e38ef79e256cca8b969f22e6f" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.007071 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-578f976b4-mj2qx" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.032804 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.032784124 podStartE2EDuration="4.032784124s" podCreationTimestamp="2026-02-02 07:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:23.026191002 +0000 UTC m=+1208.403458954" watchObservedRunningTime="2026-02-02 07:06:23.032784124 +0000 UTC m=+1208.410052036" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.054362 4842 scope.go:117] "RemoveContainer" containerID="2aaca1b2bb1165d98216c87b7292187d66c8775a2542b31141a6399a0f020777" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.086331 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.099961 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-578f976b4-mj2qx"] Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.287967 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.446209 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" path="/var/lib/kubelet/pods/ac50621f-67cd-441d-99ea-6839f7f3b556/volumes" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.548252 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"] Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.548498 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fcc587c45-x7h24" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-api" containerID="cri-o://b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6" gracePeriod=30 Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.548752 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6fcc587c45-x7h24" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" containerID="cri-o://ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775" gracePeriod=30 Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.553904 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6fcc587c45-x7h24" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9696/\": EOF" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.574804 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6684555597-gjtgz"] Feb 02 07:06:23 crc kubenswrapper[4842]: E0202 07:06:23.575165 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api-log" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.575176 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" 
containerName="barbican-api-log" Feb 02 07:06:23 crc kubenswrapper[4842]: E0202 07:06:23.575192 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.575198 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.575375 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.575398 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac50621f-67cd-441d-99ea-6839f7f3b556" containerName="barbican-api-log" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.576269 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.591337 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6684555597-gjtgz"] Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679596 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679659 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679677 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679752 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj647\" (UniqueName: \"kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679772 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679793 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " 
pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.679813 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781482 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781535 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781553 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781632 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj647\" (UniqueName: \"kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781651 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781670 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.781689 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.788375 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.790511 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.791814 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.794108 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.801899 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.805827 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj647\" (UniqueName: \"kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.836032 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle\") pod \"neutron-6684555597-gjtgz\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") " pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:23 crc kubenswrapper[4842]: I0202 07:06:23.891222 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:24 crc kubenswrapper[4842]: I0202 07:06:24.034550 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerStarted","Data":"65fe3e72ea38c1f2d2b6b3a6c420618912dad1d016bd4f786028a45d00817ad9"} Feb 02 07:06:24 crc kubenswrapper[4842]: I0202 07:06:24.038054 4842 generic.go:334] "Generic (PLEG): container finished" podID="3aaab28f-fb61-4600-b66f-a485ca345112" containerID="ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775" exitCode=0 Feb 02 07:06:24 crc kubenswrapper[4842]: I0202 07:06:24.038194 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerDied","Data":"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775"} Feb 02 07:06:24 crc kubenswrapper[4842]: I0202 07:06:24.509676 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6684555597-gjtgz"] Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.077337 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerStarted","Data":"69048ee01a49fa4ed888b0c135134e06af01f907b56780330edbc72e09136e83"} Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.077734 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerStarted","Data":"679d0126323f1cafc695474001597b9d37c1a23ba5158a00e7f240fffa003eca"} Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.077762 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerStarted","Data":"642e7ab1c818fa3e0857124b890ed7f6355271588ac21bdb99c64d978b7374b0"} Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.079475 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.113930 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6684555597-gjtgz" podStartSLOduration=2.113902098 podStartE2EDuration="2.113902098s" podCreationTimestamp="2026-02-02 07:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:25.105708796 +0000 UTC m=+1210.482976738" watchObservedRunningTime="2026-02-02 07:06:25.113902098 +0000 UTC m=+1210.491170050" Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.551387 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.603274 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6fcc587c45-x7h24" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9696/\": dial tcp 10.217.0.152:9696: connect: connection refused" Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.721508 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:25 crc kubenswrapper[4842]: I0202 07:06:25.722006 4842 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="dnsmasq-dns" containerID="cri-o://053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117" gracePeriod=10 Feb 02 07:06:25 crc kubenswrapper[4842]: E0202 07:06:25.882255 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod595bc2a4_f0b8_4930_8c66_3b3da4cc4630.slice/crio-conmon-053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.063027 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.102236 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerStarted","Data":"9fd61c4357d65c3104ccc6627ce5c120ccaf3a3a092c30986f1996894ba11d04"} Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.105150 4842 generic.go:334] "Generic (PLEG): container finished" podID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerID="053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117" exitCode=0 Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.105235 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" event={"ID":"595bc2a4-f0b8-4930-8c66-3b3da4cc4630","Type":"ContainerDied","Data":"053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117"} Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.148375 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.148660 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="cinder-scheduler" containerID="cri-o://2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b" gracePeriod=30 Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.149152 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="probe" containerID="cri-o://54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681" gracePeriod=30 Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.169109 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.057823247 podStartE2EDuration="6.169084181s" podCreationTimestamp="2026-02-02 07:06:20 +0000 UTC" firstStartedPulling="2026-02-02 07:06:21.062433827 +0000 UTC m=+1206.439701739" lastFinishedPulling="2026-02-02 07:06:25.173694761 +0000 UTC m=+1210.550962673" observedRunningTime="2026-02-02 07:06:26.139449001 +0000 UTC m=+1211.516716913" watchObservedRunningTime="2026-02-02 07:06:26.169084181 +0000 UTC m=+1211.546352093" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.343161 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.461888 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.461983 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.462050 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.462121 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.462164 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r84hx\" (UniqueName: \"kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.462228 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb\") pod \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\" (UID: \"595bc2a4-f0b8-4930-8c66-3b3da4cc4630\") " Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.473353 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx" (OuterVolumeSpecName: "kube-api-access-r84hx") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "kube-api-access-r84hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.507090 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.512553 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.512817 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config" (OuterVolumeSpecName: "config") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.520801 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.531115 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "595bc2a4-f0b8-4930-8c66-3b3da4cc4630" (UID: "595bc2a4-f0b8-4930-8c66-3b3da4cc4630"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564839 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r84hx\" (UniqueName: \"kubernetes.io/projected/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-kube-api-access-r84hx\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564866 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564875 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564886 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564896 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:26 crc kubenswrapper[4842]: I0202 07:06:26.564906 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/595bc2a4-f0b8-4930-8c66-3b3da4cc4630-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.115812 4842 generic.go:334] "Generic (PLEG): container finished" podID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerID="54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681" exitCode=0 Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.115893 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerDied","Data":"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681"} Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 
07:06:27.118727 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" event={"ID":"595bc2a4-f0b8-4930-8c66-3b3da4cc4630","Type":"ContainerDied","Data":"f735cc0c0ef98cb5751b1343a0d1aca16cf6fb764a0966b2ebc18ac2392a9b7d"} Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.118794 4842 scope.go:117] "RemoveContainer" containerID="053391fc9b848177ff3e50865d7e17cdfe73b462de9b2367e66796f0824df117" Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.118942 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdf86f46f-hdddb" Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.119130 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.142758 4842 scope.go:117] "RemoveContainer" containerID="b697a77798b314f9ac4ee3c53ca23704430e0f4eccb0fe586772468c61943fe2" Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.155840 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.164190 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdf86f46f-hdddb"] Feb 02 07:06:27 crc kubenswrapper[4842]: I0202 07:06:27.448627 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" path="/var/lib/kubelet/pods/595bc2a4-f0b8-4930-8c66-3b3da4cc4630/volumes" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.683898 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.832790 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.832839 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.832892 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.832947 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.832994 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw2ng\" (UniqueName: \"kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: 
I0202 07:06:29.833065 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data\") pod \"d737380b-08d3-455f-a9a7-080d76cabc9f\" (UID: \"d737380b-08d3-455f-a9a7-080d76cabc9f\") " Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.836765 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.839580 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.840058 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts" (OuterVolumeSpecName: "scripts") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.842753 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng" (OuterVolumeSpecName: "kube-api-access-gw2ng") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "kube-api-access-gw2ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.894319 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "combined-ca-bundle". 
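
The UniqueName strings in these unmount records encode the volume plugin plus "<pod UID>-<volume name>", so "kubernetes.io/secret/d737380b-...-scripts" and "kubernetes.io/host-path/d737380b-...-etc-machine-id" correspond to a Secret volume and a hostPath volume in the cinder-scheduler pod spec. A sketch of what those two volume definitions look like in Go (the secret name "cinder-scripts" and the /etc/machine-id host path are assumptions inferred from the volume names; the log only shows the UniqueName):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // "scripts": backed by a Secret; secret name is hypothetical.
        scripts := corev1.Volume{
            Name: "scripts",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "cinder-scripts"},
            },
        }
        // "etc-machine-id": a hostPath mount; path assumed from the name.
        machineID := corev1.Volume{
            Name: "etc-machine-id",
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/etc/machine-id"},
            },
        }
        fmt.Println(scripts.Name, machineID.Name)
    }
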
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.935115 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.935143 4842 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d737380b-08d3-455f-a9a7-080d76cabc9f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.935153 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.935161 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.935170 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw2ng\" (UniqueName: \"kubernetes.io/projected/d737380b-08d3-455f-a9a7-080d76cabc9f-kube-api-access-gw2ng\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:29 crc kubenswrapper[4842]: I0202 07:06:29.965502 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data" (OuterVolumeSpecName: "config-data") pod "d737380b-08d3-455f-a9a7-080d76cabc9f" (UID: "d737380b-08d3-455f-a9a7-080d76cabc9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.037199 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d737380b-08d3-455f-a9a7-080d76cabc9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.146753 4842 generic.go:334] "Generic (PLEG): container finished" podID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerID="2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b" exitCode=0 Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.146796 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerDied","Data":"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b"} Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.146826 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d737380b-08d3-455f-a9a7-080d76cabc9f","Type":"ContainerDied","Data":"448240e5421a87237dad04890b2a4f40bc671d8ec2cf606c184317a141cf69db"} Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.146842 4842 scope.go:117] "RemoveContainer" containerID="54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.146842 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.193153 4842 scope.go:117] "RemoveContainer" containerID="2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.214440 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.220761 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.230354 4842 scope.go:117] "RemoveContainer" containerID="54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681" Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.230764 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681\": container with ID starting with 54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681 not found: ID does not exist" containerID="54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.230793 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681"} err="failed to get container status \"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681\": rpc error: code = NotFound desc = could not find container \"54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681\": container with ID starting with 54284a46ac09d894f4ded8d4490b29e31ca3f5c624e7f4069d128d4f574ec681 not found: ID does not exist" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.230813 4842 scope.go:117] "RemoveContainer" containerID="2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b" Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.231630 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b\": container with ID starting with 2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b not found: ID does not exist" containerID="2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.231662 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b"} err="failed to get container status \"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b\": rpc error: code = NotFound desc = could not find container \"2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b\": container with ID starting with 2c8ee50e4f65881fd7304ba6c36f7a3d6a7b1ea6446992c1865f5077f7b9fd3b not found: ID does not exist" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.235826 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.236315 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="cinder-scheduler" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236333 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" 
containerName="cinder-scheduler" Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.236350 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="probe" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236356 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="probe" Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.236394 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="dnsmasq-dns" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236402 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="dnsmasq-dns" Feb 02 07:06:30 crc kubenswrapper[4842]: E0202 07:06:30.236415 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="init" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236423 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="init" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236654 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="cinder-scheduler" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236670 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="595bc2a4-f0b8-4930-8c66-3b3da4cc4630" containerName="dnsmasq-dns" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.236680 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" containerName="probe" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.241706 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.251030 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.252367 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345150 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345249 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345398 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345468 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345538 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmstk\" (UniqueName: \"kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.345781 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.447858 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.447935 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.447967 4842 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.448034 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.448073 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.448114 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmstk\" (UniqueName: \"kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.449150 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.453341 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.453547 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.454201 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.454619 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc kubenswrapper[4842]: I0202 07:06:30.468898 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmstk\" (UniqueName: \"kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk\") pod \"cinder-scheduler-0\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " pod="openstack/cinder-scheduler-0" Feb 02 07:06:30 crc 
kubenswrapper[4842]: I0202 07:06:30.567937 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:06:31 crc kubenswrapper[4842]: W0202 07:06:31.032027 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115a51a9_6125_46e1_a960_a66cb9957d38.slice/crio-d9adaa71516bc7f37ff65b80add9138abcfd4cb747d204e8aa686e59e5b9af28 WatchSource:0}: Error finding container d9adaa71516bc7f37ff65b80add9138abcfd4cb747d204e8aa686e59e5b9af28: Status 404 returned error can't find the container with id d9adaa71516bc7f37ff65b80add9138abcfd4cb747d204e8aa686e59e5b9af28 Feb 02 07:06:31 crc kubenswrapper[4842]: I0202 07:06:31.042661 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:06:31 crc kubenswrapper[4842]: I0202 07:06:31.157431 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerStarted","Data":"d9adaa71516bc7f37ff65b80add9138abcfd4cb747d204e8aa686e59e5b9af28"} Feb 02 07:06:31 crc kubenswrapper[4842]: I0202 07:06:31.444360 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d737380b-08d3-455f-a9a7-080d76cabc9f" path="/var/lib/kubelet/pods/d737380b-08d3-455f-a9a7-080d76cabc9f/volumes" Feb 02 07:06:32 crc kubenswrapper[4842]: I0202 07:06:32.152720 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 02 07:06:32 crc kubenswrapper[4842]: I0202 07:06:32.197469 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerStarted","Data":"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec"} Feb 02 07:06:33 crc kubenswrapper[4842]: I0202 07:06:33.207807 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerStarted","Data":"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf"} Feb 02 07:06:33 crc kubenswrapper[4842]: I0202 07:06:33.231670 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.231653235 podStartE2EDuration="3.231653235s" podCreationTimestamp="2026-02-02 07:06:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:33.224679433 +0000 UTC m=+1218.601947355" watchObservedRunningTime="2026-02-02 07:06:33.231653235 +0000 UTC m=+1218.608921147" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.105618 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.113039 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.367460 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.451330 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.679046 4842 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:06:34 crc kubenswrapper[4842]: I0202 07:06:34.746069 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:35 crc kubenswrapper[4842]: I0202 07:06:35.225585 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-697d496d6b-bz7zg" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-log" containerID="cri-o://dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe" gracePeriod=30 Feb 02 07:06:35 crc kubenswrapper[4842]: I0202 07:06:35.225685 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-697d496d6b-bz7zg" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-api" containerID="cri-o://82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786" gracePeriod=30 Feb 02 07:06:35 crc kubenswrapper[4842]: I0202 07:06:35.568546 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 02 07:06:36 crc kubenswrapper[4842]: I0202 07:06:36.239521 4842 generic.go:334] "Generic (PLEG): container finished" podID="726c1772-2536-414e-a6ce-9c1437b021d1" containerID="dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe" exitCode=143 Feb 02 07:06:36 crc kubenswrapper[4842]: I0202 07:06:36.239630 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerDied","Data":"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe"} Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.772720 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.773665 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-central-agent" containerID="cri-o://0275ebaf83cd1dc6f0f1e530a2520ae303911995fcb24e0ce6bb618355448ca7" gracePeriod=30 Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.773794 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="proxy-httpd" containerID="cri-o://9fd61c4357d65c3104ccc6627ce5c120ccaf3a3a092c30986f1996894ba11d04" gracePeriod=30 Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.773832 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="sg-core" containerID="cri-o://65fe3e72ea38c1f2d2b6b3a6c420618912dad1d016bd4f786028a45d00817ad9" gracePeriod=30 Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.774010 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-notification-agent" containerID="cri-o://80e2b283fa7d6732f1ee502cb45ba016aee0bc6094fa574b3e9b062a5cb23a5c" gracePeriod=30 Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.829318 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"] Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.843608 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.847381 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.847590 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.847607 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"] Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.847695 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.907674 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.907735 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.907757 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.907796 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.908060 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.908209 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.908315 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " 
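
The pod_startup_latency_tracker records reconcile arithmetically from their own timestamps: for ceilometer-0 earlier in the log, podStartE2EDuration (6.169084181s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (2.057823247s) is that E2E figure minus the image-pull window (lastFinishedPulling − firstStartedPulling = 4.111260934s); for cinder-scheduler-0 just above, both pull timestamps are zero, so the two durations coincide at 3.231653235s. A short Go check of the ceilometer-0 numbers:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            // Layout matches Go's default time.Time String() output,
            // which is what the kubelet prints in these records.
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-02-02 07:06:20 +0000 UTC")
        firstPull := parse("2026-02-02 07:06:21.062433827 +0000 UTC")
        lastPull := parse("2026-02-02 07:06:25.173694761 +0000 UTC")
        observed := parse("2026-02-02 07:06:26.169084181 +0000 UTC")

        e2e := observed.Sub(created)         // 6.169084181s, the E2E duration
        slo := e2e - lastPull.Sub(firstPull) // 2.057823247s, pull time excluded
        fmt.Println(e2e, slo)
    }
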
pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.908408 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqwsc\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:38 crc kubenswrapper[4842]: I0202 07:06:38.932051 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009490 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009579 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009698 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009746 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009784 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009861 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.009905 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42brm\" (UniqueName: \"kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm\") pod \"726c1772-2536-414e-a6ce-9c1437b021d1\" (UID: \"726c1772-2536-414e-a6ce-9c1437b021d1\") " Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010160 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010254 
4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010304 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqwsc\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010406 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010452 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010483 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010521 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.010568 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.011963 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.012793 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.016839 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs" (OuterVolumeSpecName: "logs") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.017382 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.019000 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm" (OuterVolumeSpecName: "kube-api-access-42brm") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "kube-api-access-42brm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.019897 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.020513 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.023409 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts" (OuterVolumeSpecName: "scripts") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.024069 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.025774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.032850 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqwsc\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc\") pod \"swift-proxy-659598d599-lpzh5\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.079085 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 02 07:06:39 crc kubenswrapper[4842]: E0202 07:06:39.079647 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-api" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.079677 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-api" Feb 02 07:06:39 crc kubenswrapper[4842]: E0202 07:06:39.079708 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-log" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.079716 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-log" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.079942 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-api" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.079969 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" containerName="placement-log" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.080582 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.083957 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-rzqpc" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.084980 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.085135 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.088970 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112103 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112506 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112527 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz5x6\" (UniqueName: \"kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112600 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112656 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112674 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/726c1772-2536-414e-a6ce-9c1437b021d1-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.112685 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42brm\" (UniqueName: \"kubernetes.io/projected/726c1772-2536-414e-a6ce-9c1437b021d1-kube-api-access-42brm\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.122539 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data" (OuterVolumeSpecName: "config-data") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "config-data". 
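
The "SyncLoop (probe)" records throughout this window, and the "Probe failed" record for the ceilometer proxy-httpd container that follows (GET against http://10.217.0.167:3000/ reset mid-shutdown), are driven by probe definitions in the pod specs. A hypothetical probe matching that endpoint — the real ceilometer spec is not in the log, and the period and threshold values are assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/",
                    Port: intstr.FromInt(3000), // port seen in the failure record
                },
            },
            PeriodSeconds:    10, // assumed; not visible in the log
            FailureThreshold: 3,  // assumed; not visible in the log
        }
        fmt.Printf("%+v\n", probe)
    }
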
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.124393 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.124490 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.137166 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "726c1772-2536-414e-a6ce-9c1437b021d1" (UID: "726c1772-2536-414e-a6ce-9c1437b021d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.143915 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.167:3000/\": read tcp 10.217.0.2:53090->10.217.0.167:3000: read: connection reset by peer" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214364 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214459 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214517 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214535 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz5x6\" (UniqueName: \"kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214626 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc 
kubenswrapper[4842]: I0202 07:06:39.214642 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214651 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.214659 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/726c1772-2536-414e-a6ce-9c1437b021d1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.215360 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.218870 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.218877 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.228793 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz5x6\" (UniqueName: \"kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6\") pod \"openstackclient\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.255807 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.264299 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-697d496d6b-bz7zg" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.264303 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerDied","Data":"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786"} Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.264464 4842 scope.go:117] "RemoveContainer" containerID="82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.264312 4842 generic.go:334] "Generic (PLEG): container finished" podID="726c1772-2536-414e-a6ce-9c1437b021d1" containerID="82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786" exitCode=0 Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.264548 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-697d496d6b-bz7zg" event={"ID":"726c1772-2536-414e-a6ce-9c1437b021d1","Type":"ContainerDied","Data":"3841fc7dcb9ce569457a802c09c27ff59529bd2560831414d8333da874fb2c77"} Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292822 4842 generic.go:334] "Generic (PLEG): container finished" podID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerID="9fd61c4357d65c3104ccc6627ce5c120ccaf3a3a092c30986f1996894ba11d04" exitCode=0 Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292850 4842 generic.go:334] "Generic (PLEG): container finished" podID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerID="65fe3e72ea38c1f2d2b6b3a6c420618912dad1d016bd4f786028a45d00817ad9" exitCode=2 Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292860 4842 generic.go:334] "Generic (PLEG): container finished" podID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerID="0275ebaf83cd1dc6f0f1e530a2520ae303911995fcb24e0ce6bb618355448ca7" exitCode=0 Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerDied","Data":"9fd61c4357d65c3104ccc6627ce5c120ccaf3a3a092c30986f1996894ba11d04"} Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292900 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerDied","Data":"65fe3e72ea38c1f2d2b6b3a6c420618912dad1d016bd4f786028a45d00817ad9"} Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.292909 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerDied","Data":"0275ebaf83cd1dc6f0f1e530a2520ae303911995fcb24e0ce6bb618355448ca7"} Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.315736 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.323007 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-697d496d6b-bz7zg"] Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.419451 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.433075 4842 scope.go:117] "RemoveContainer" containerID="dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.452069 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726c1772-2536-414e-a6ce-9c1437b021d1" path="/var/lib/kubelet/pods/726c1772-2536-414e-a6ce-9c1437b021d1/volumes" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.600158 4842 scope.go:117] "RemoveContainer" containerID="82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786" Feb 02 07:06:39 crc kubenswrapper[4842]: E0202 07:06:39.600978 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786\": container with ID starting with 82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786 not found: ID does not exist" containerID="82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.601013 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786"} err="failed to get container status \"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786\": rpc error: code = NotFound desc = could not find container \"82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786\": container with ID starting with 82a543d3d9cc00e4f8309fbaaed6e12bd0276e8a75a5a75d05dfd12644dff786 not found: ID does not exist" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.601038 4842 scope.go:117] "RemoveContainer" containerID="dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe" Feb 02 07:06:39 crc kubenswrapper[4842]: E0202 07:06:39.603599 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe\": container with ID starting with dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe not found: ID does not exist" containerID="dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.603629 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe"} err="failed to get container status \"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe\": rpc error: code = NotFound desc = could not find container \"dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe\": container with ID starting with dc6d91d0986b64e793e6b5ee027d9ab62f264d291e919b8d22ff5580bd033fbe not found: ID does not exist" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.806393 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"] Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.923491 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:06:39 crc kubenswrapper[4842]: I0202 07:06:39.932347 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.032892 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.032972 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.033056 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g4w4\" (UniqueName: \"kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.033166 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.033260 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.033297 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.033400 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs\") pod \"3aaab28f-fb61-4600-b66f-a485ca345112\" (UID: \"3aaab28f-fb61-4600-b66f-a485ca345112\") " Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.037465 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4" (OuterVolumeSpecName: "kube-api-access-4g4w4") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "kube-api-access-4g4w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.037735 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.102968 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.107147 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.109281 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config" (OuterVolumeSpecName: "config") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.114570 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.121521 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3aaab28f-fb61-4600-b66f-a485ca345112" (UID: "3aaab28f-fb61-4600-b66f-a485ca345112"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135544 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135870 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135880 4842 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135892 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g4w4\" (UniqueName: \"kubernetes.io/projected/3aaab28f-fb61-4600-b66f-a485ca345112-kube-api-access-4g4w4\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135901 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135910 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.135927 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aaab28f-fb61-4600-b66f-a485ca345112-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.303710 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"590d1088-e964-43a6-b879-01c8b83d4147","Type":"ContainerStarted","Data":"abd7b9a59e647cb412c034e625a72fdd9b5e8c874ae4e981bda1619d04a7aa91"} Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.305876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerStarted","Data":"49dfdfa99a47811582b530171bcdb672444bf58776e14b517fe66bf3f7abc750"} Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.305901 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerStarted","Data":"1e413e67564e718a498ac35eeced53092dbd9372163eaf63c69cfa47632f99ec"} Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.305912 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerStarted","Data":"c97160040d0350fa9bd5e1bbc3b5084d4e4f379ea92abc97f8017a5311a0c9cf"} Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.306043 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.306094 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:40 crc 
kubenswrapper[4842]: I0202 07:06:40.307996 4842 generic.go:334] "Generic (PLEG): container finished" podID="3aaab28f-fb61-4600-b66f-a485ca345112" containerID="b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6" exitCode=0 Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.308039 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fcc587c45-x7h24" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.308041 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerDied","Data":"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6"} Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.308069 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fcc587c45-x7h24" event={"ID":"3aaab28f-fb61-4600-b66f-a485ca345112","Type":"ContainerDied","Data":"6baf18e2465586bae82b31b897e8d4dfb75242a3b157fb93fe3a29ff487cbf1b"} Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.308088 4842 scope.go:117] "RemoveContainer" containerID="ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.327772 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-659598d599-lpzh5" podStartSLOduration=2.327757175 podStartE2EDuration="2.327757175s" podCreationTimestamp="2026-02-02 07:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:40.326497714 +0000 UTC m=+1225.703765626" watchObservedRunningTime="2026-02-02 07:06:40.327757175 +0000 UTC m=+1225.705025087" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.343963 4842 scope.go:117] "RemoveContainer" containerID="b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.349922 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"] Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.357487 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6fcc587c45-x7h24"] Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.363558 4842 scope.go:117] "RemoveContainer" containerID="ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775" Feb 02 07:06:40 crc kubenswrapper[4842]: E0202 07:06:40.364053 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775\": container with ID starting with ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775 not found: ID does not exist" containerID="ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.364086 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775"} err="failed to get container status \"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775\": rpc error: code = NotFound desc = could not find container \"ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775\": container with ID starting with ca6552ce5887f06f32bb03e339a3e9124e1fa65f5a80acb32717eb27f56d3775 not found: ID does not exist" Feb 02 07:06:40 crc 
kubenswrapper[4842]: I0202 07:06:40.364111 4842 scope.go:117] "RemoveContainer" containerID="b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6" Feb 02 07:06:40 crc kubenswrapper[4842]: E0202 07:06:40.364718 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6\": container with ID starting with b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6 not found: ID does not exist" containerID="b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.364756 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6"} err="failed to get container status \"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6\": rpc error: code = NotFound desc = could not find container \"b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6\": container with ID starting with b303529aa7f40b97ddac015c60fbc643d3194166e20eda9000a91d5e375c56d6 not found: ID does not exist" Feb 02 07:06:40 crc kubenswrapper[4842]: I0202 07:06:40.780729 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 02 07:06:41 crc kubenswrapper[4842]: I0202 07:06:41.476947 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" path="/var/lib/kubelet/pods/3aaab28f-fb61-4600-b66f-a485ca345112/volumes" Feb 02 07:06:42 crc kubenswrapper[4842]: I0202 07:06:42.146338 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:06:42 crc kubenswrapper[4842]: I0202 07:06:42.146616 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:06:44 crc kubenswrapper[4842]: I0202 07:06:44.362538 4842 generic.go:334] "Generic (PLEG): container finished" podID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerID="80e2b283fa7d6732f1ee502cb45ba016aee0bc6094fa574b3e9b062a5cb23a5c" exitCode=0 Feb 02 07:06:44 crc kubenswrapper[4842]: I0202 07:06:44.362747 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerDied","Data":"80e2b283fa7d6732f1ee502cb45ba016aee0bc6094fa574b3e9b062a5cb23a5c"} Feb 02 07:06:45 crc kubenswrapper[4842]: I0202 07:06:45.497929 4842 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podd9f1c72e-953b-45ba-ba69-c7574f82e8ad"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podd9f1c72e-953b-45ba-ba69-c7574f82e8ad] : Timed out while waiting for systemd to remove kubepods-besteffort-podd9f1c72e_953b_45ba_ba69_c7574f82e8ad.slice" Feb 02 07:06:45 crc kubenswrapper[4842]: E0202 07:06:45.497999 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods 
besteffort podd9f1c72e-953b-45ba-ba69-c7574f82e8ad] : unable to destroy cgroup paths for cgroup [kubepods besteffort podd9f1c72e-953b-45ba-ba69-c7574f82e8ad] : Timed out while waiting for systemd to remove kubepods-besteffort-podd9f1c72e_953b_45ba_ba69_c7574f82e8ad.slice" pod="openstack/cinder-db-sync-phj68" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" Feb 02 07:06:46 crc kubenswrapper[4842]: I0202 07:06:46.386317 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-phj68" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.267714 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.275825 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.475245 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.476807 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0636bdd6-0d17-4f9b-9031-663dfb98f672","Type":"ContainerDied","Data":"2332347c0d70878870bc3cca3315995176808c8257ccc12723509cbb8433193f"} Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.476898 4842 scope.go:117] "RemoveContainer" containerID="9fd61c4357d65c3104ccc6627ce5c120ccaf3a3a092c30986f1996894ba11d04" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.506026 4842 scope.go:117] "RemoveContainer" containerID="65fe3e72ea38c1f2d2b6b3a6c420618912dad1d016bd4f786028a45d00817ad9" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.523915 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.523981 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf6fm\" (UniqueName: \"kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.523997 4842 scope.go:117] "RemoveContainer" containerID="80e2b283fa7d6732f1ee502cb45ba016aee0bc6094fa574b3e9b062a5cb23a5c" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524098 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524128 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524189 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts\") 
pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524253 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524276 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle\") pod \"0636bdd6-0d17-4f9b-9031-663dfb98f672\" (UID: \"0636bdd6-0d17-4f9b-9031-663dfb98f672\") " Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.524574 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.525016 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.525263 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.530159 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts" (OuterVolumeSpecName: "scripts") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.534119 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm" (OuterVolumeSpecName: "kube-api-access-hf6fm") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "kube-api-access-hf6fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.550962 4842 scope.go:117] "RemoveContainer" containerID="0275ebaf83cd1dc6f0f1e530a2520ae303911995fcb24e0ce6bb618355448ca7" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.565710 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.596113 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.624191 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data" (OuterVolumeSpecName: "config-data") pod "0636bdd6-0d17-4f9b-9031-663dfb98f672" (UID: "0636bdd6-0d17-4f9b-9031-663dfb98f672"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626404 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0636bdd6-0d17-4f9b-9031-663dfb98f672-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626431 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626441 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626449 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626459 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0636bdd6-0d17-4f9b-9031-663dfb98f672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:49 crc kubenswrapper[4842]: I0202 07:06:49.626468 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf6fm\" (UniqueName: \"kubernetes.io/projected/0636bdd6-0d17-4f9b-9031-663dfb98f672-kube-api-access-hf6fm\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.446912 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"590d1088-e964-43a6-b879-01c8b83d4147","Type":"ContainerStarted","Data":"7321f950b4c167a7b34d5c400d350da10c11bc84a859361985534a57f9758316"} Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.447888 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.467897 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.123709148 podStartE2EDuration="11.467882114s" podCreationTimestamp="2026-02-02 07:06:39 +0000 UTC" firstStartedPulling="2026-02-02 07:06:39.958638122 +0000 UTC m=+1225.335906034" lastFinishedPulling="2026-02-02 07:06:49.302811088 +0000 UTC m=+1234.680079000" observedRunningTime="2026-02-02 07:06:50.465886664 +0000 UTC m=+1235.843154596" watchObservedRunningTime="2026-02-02 07:06:50.467882114 +0000 UTC m=+1235.845150026" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.485380 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.495296 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507117 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507461 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-api" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507475 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-api" Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507486 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-central-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507491 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-central-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507503 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="proxy-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507509 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="proxy-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507524 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-notification-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507531 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-notification-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507548 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507553 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.507564 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="sg-core" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507569 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="sg-core" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 
07:06:50.507729 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-api" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507742 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aaab28f-fb61-4600-b66f-a485ca345112" containerName="neutron-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507759 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="proxy-httpd" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507768 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-notification-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507782 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="ceilometer-central-agent" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.507791 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" containerName="sg-core" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.510478 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.515900 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.518274 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.520137 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.652830 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.652934 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.653000 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.653054 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghhj\" (UniqueName: \"kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.653625 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.653691 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.653804 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.699488 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:50 crc kubenswrapper[4842]: E0202 07:06:50.700073 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-sghhj log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="ef57521a-a9fc-42b0-b641-1258e3bfdf34" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755195 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sghhj\" (UniqueName: \"kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755248 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755275 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755318 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755365 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755404 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " 
pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755454 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755777 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.755884 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.760838 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.762872 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.767932 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.770739 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:50 crc kubenswrapper[4842]: I0202 07:06:50.782700 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sghhj\" (UniqueName: \"kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj\") pod \"ceilometer-0\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " pod="openstack/ceilometer-0" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.445083 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0636bdd6-0d17-4f9b-9031-663dfb98f672" path="/var/lib/kubelet/pods/0636bdd6-0d17-4f9b-9031-663dfb98f672/volumes" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.456380 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.470180 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.669334 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.669627 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.669764 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sghhj\" (UniqueName: \"kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.669884 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670074 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670247 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670364 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd\") pod \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\" (UID: \"ef57521a-a9fc-42b0-b641-1258e3bfdf34\") " Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670459 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670854 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.670942 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.674096 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.675861 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj" (OuterVolumeSpecName: "kube-api-access-sghhj") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "kube-api-access-sghhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.676183 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts" (OuterVolumeSpecName: "scripts") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.677437 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.679362 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data" (OuterVolumeSpecName: "config-data") pod "ef57521a-a9fc-42b0-b641-1258e3bfdf34" (UID: "ef57521a-a9fc-42b0-b641-1258e3bfdf34"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771892 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef57521a-a9fc-42b0-b641-1258e3bfdf34-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771935 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771945 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771958 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sghhj\" (UniqueName: \"kubernetes.io/projected/ef57521a-a9fc-42b0-b641-1258e3bfdf34-kube-api-access-sghhj\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771966 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:51 crc kubenswrapper[4842]: I0202 07:06:51.771975 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef57521a-a9fc-42b0-b641-1258e3bfdf34-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.464815 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.554921 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.571654 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.581833 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.584753 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.586600 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588455 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588490 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588572 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588606 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588652 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp7mr\" (UniqueName: \"kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588684 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588720 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.588818 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.597687 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690102 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690141 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690230 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690278 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690304 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp7mr\" (UniqueName: \"kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690339 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690374 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690648 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.690913 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.697384 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.697579 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.697941 4842 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.707062 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.712118 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp7mr\" (UniqueName: \"kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr\") pod \"ceilometer-0\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " pod="openstack/ceilometer-0" Feb 02 07:06:52 crc kubenswrapper[4842]: I0202 07:06:52.911142 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.347580 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.443651 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef57521a-a9fc-42b0-b641-1258e3bfdf34" path="/var/lib/kubelet/pods/ef57521a-a9fc-42b0-b641-1258e3bfdf34/volumes" Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.476453 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerStarted","Data":"610ef45c658d7af4f1bfccb5ab1bcf0f7f84312f0fd214a19b9a637d039efaf5"} Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.905615 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6684555597-gjtgz" Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.969323 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.969544 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b469b995b-npwfd" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-api" containerID="cri-o://6747e535436e2bdd0c46d5273f8b5a7d29b3c3f7226e94896a48a5bfcdb6a2d9" gracePeriod=30 Feb 02 07:06:53 crc kubenswrapper[4842]: I0202 07:06:53.969823 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b469b995b-npwfd" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-httpd" containerID="cri-o://f8f9e0a8b64ae08b996a6ff20de4cb61c2fe7c362caaa42c329de676a9077b38" gracePeriod=30 Feb 02 07:06:54 crc kubenswrapper[4842]: I0202 07:06:54.377046 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:06:54 crc kubenswrapper[4842]: I0202 07:06:54.495835 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerStarted","Data":"3a5cb3f49b99abe6192e05d777a57a2ec064de70a666aa2c8b933349f5030599"} Feb 02 07:06:54 crc kubenswrapper[4842]: I0202 07:06:54.498106 4842 generic.go:334] "Generic (PLEG): container finished" podID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerID="f8f9e0a8b64ae08b996a6ff20de4cb61c2fe7c362caaa42c329de676a9077b38" exitCode=0 Feb 02 
07:06:54 crc kubenswrapper[4842]: I0202 07:06:54.498145 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerDied","Data":"f8f9e0a8b64ae08b996a6ff20de4cb61c2fe7c362caaa42c329de676a9077b38"} Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.533071 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerStarted","Data":"36b2b05bbe375b399c98b67e29fc0579c7a94211ddd64f7ddba9592374c382bd"} Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.579894 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dg9pd"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.580903 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.595918 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dg9pd"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.656731 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts\") pod \"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.657444 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5k4k\" (UniqueName: \"kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k\") pod \"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.690397 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-89ff-account-create-update-pb4bw"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.691534 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.693810 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.698561 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jph4l"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.699489 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.708642 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-89ff-account-create-update-pb4bw"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.726339 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jph4l"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759746 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759794 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts\") pod \"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759818 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq9fj\" (UniqueName: \"kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759862 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpvzr\" (UniqueName: \"kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759900 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5k4k\" (UniqueName: \"kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k\") pod \"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.759923 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.760647 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts\") pod \"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.786122 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5k4k\" (UniqueName: \"kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k\") pod 
\"nova-api-db-create-dg9pd\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.864669 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.864719 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq9fj\" (UniqueName: \"kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.864780 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpvzr\" (UniqueName: \"kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.864828 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.865552 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.865555 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.882662 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-79v8r"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.885312 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq9fj\" (UniqueName: \"kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj\") pod \"nova-api-89ff-account-create-update-pb4bw\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.885461 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.896716 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpvzr\" (UniqueName: \"kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr\") pod \"nova-cell0-db-create-jph4l\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.902833 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-llc96"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.903951 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.904466 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.908619 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.938608 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-79v8r"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.964275 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-llc96"] Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.966435 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.966490 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qpr\" (UniqueName: \"kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.966528 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4x9m\" (UniqueName: \"kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:55 crc kubenswrapper[4842]: I0202 07:06:55.966590 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.054856 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.060650 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.068838 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.068945 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.068974 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28qpr\" (UniqueName: \"kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.069005 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4x9m\" (UniqueName: \"kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.070571 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.072995 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.096281 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-hm58m"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.097605 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.101791 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28qpr\" (UniqueName: \"kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr\") pod \"nova-cell0-7f00-account-create-update-llc96\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.103380 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.109819 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4x9m\" (UniqueName: \"kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m\") pod \"nova-cell1-db-create-79v8r\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.130385 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-hm58m"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.171332 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.171469 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8mp\" (UniqueName: \"kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.272767 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8mp\" (UniqueName: \"kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.272839 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.274956 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.293021 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6k8mp\" (UniqueName: \"kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp\") pod \"nova-cell1-17c9-account-create-update-hm58m\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.310693 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.319137 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.421923 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dg9pd"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.557112 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dg9pd" event={"ID":"4b414999-f3d0-4101-abe7-ed8c7747ce5f","Type":"ContainerStarted","Data":"94ef265414e26a0b5006140a913a8d7ff6850122bee0165ff7e8ae90e61983f0"} Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.568494 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerStarted","Data":"022aa50ba41d0a413d49d7816b95c9ce705b40b44d3e4b26928051ada603decd"} Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.571379 4842 generic.go:334] "Generic (PLEG): container finished" podID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerID="6747e535436e2bdd0c46d5273f8b5a7d29b3c3f7226e94896a48a5bfcdb6a2d9" exitCode=0 Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.571407 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerDied","Data":"6747e535436e2bdd0c46d5273f8b5a7d29b3c3f7226e94896a48a5bfcdb6a2d9"} Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.586060 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.632052 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-89ff-account-create-update-pb4bw"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.726734 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.795336 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config\") pod \"a18aba57-b830-47d3-9b18-8946414fdd1d\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.795404 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config\") pod \"a18aba57-b830-47d3-9b18-8946414fdd1d\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.795468 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs\") pod \"a18aba57-b830-47d3-9b18-8946414fdd1d\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.795493 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle\") pod \"a18aba57-b830-47d3-9b18-8946414fdd1d\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.795626 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b482\" (UniqueName: \"kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482\") pod \"a18aba57-b830-47d3-9b18-8946414fdd1d\" (UID: \"a18aba57-b830-47d3-9b18-8946414fdd1d\") " Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.803400 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a18aba57-b830-47d3-9b18-8946414fdd1d" (UID: "a18aba57-b830-47d3-9b18-8946414fdd1d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.817078 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jph4l"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.819583 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482" (OuterVolumeSpecName: "kube-api-access-2b482") pod "a18aba57-b830-47d3-9b18-8946414fdd1d" (UID: "a18aba57-b830-47d3-9b18-8946414fdd1d"). InnerVolumeSpecName "kube-api-access-2b482". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:56 crc kubenswrapper[4842]: W0202 07:06:56.820953 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d8715fd_8755_4bd6_82a7_bf49d61e1779.slice/crio-0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6 WatchSource:0}: Error finding container 0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6: Status 404 returned error can't find the container with id 0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6 Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.899447 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.899481 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b482\" (UniqueName: \"kubernetes.io/projected/a18aba57-b830-47d3-9b18-8946414fdd1d-kube-api-access-2b482\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.932390 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a18aba57-b830-47d3-9b18-8946414fdd1d" (UID: "a18aba57-b830-47d3-9b18-8946414fdd1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.935438 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a18aba57-b830-47d3-9b18-8946414fdd1d" (UID: "a18aba57-b830-47d3-9b18-8946414fdd1d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.958116 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config" (OuterVolumeSpecName: "config") pod "a18aba57-b830-47d3-9b18-8946414fdd1d" (UID: "a18aba57-b830-47d3-9b18-8946414fdd1d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.969035 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-79v8r"] Feb 02 07:06:56 crc kubenswrapper[4842]: I0202 07:06:56.985811 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-llc96"] Feb 02 07:06:56 crc kubenswrapper[4842]: W0202 07:06:56.991121 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod939ed5f9_679d_44c4_8282_d1404d98b420.slice/crio-41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b WatchSource:0}: Error finding container 41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b: Status 404 returned error can't find the container with id 41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b Feb 02 07:06:56 crc kubenswrapper[4842]: W0202 07:06:56.994196 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod668f221e_e491_4ec6_9f40_82dd1afc3ac8.slice/crio-2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc WatchSource:0}: Error finding container 2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc: Status 404 returned error can't find the container with id 2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.003152 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.003187 4842 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.003197 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18aba57-b830-47d3-9b18-8946414fdd1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.228071 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-hm58m"] Feb 02 07:06:57 crc kubenswrapper[4842]: W0202 07:06:57.259999 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9d15d01_9c12_4b4f_9cec_037a1d21fab1.slice/crio-9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581 WatchSource:0}: Error finding container 9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581: Status 404 returned error can't find the container with id 9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.580536 4842 generic.go:334] "Generic (PLEG): container finished" podID="2d8715fd-8755-4bd6-82a7-bf49d61e1779" containerID="adafd15daec92386baa24cf42bc0363f97b26ac9307e8e8272e537e2c7e8b2cf" exitCode=0 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.580581 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jph4l" event={"ID":"2d8715fd-8755-4bd6-82a7-bf49d61e1779","Type":"ContainerDied","Data":"adafd15daec92386baa24cf42bc0363f97b26ac9307e8e8272e537e2c7e8b2cf"} Feb 
02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.580629 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jph4l" event={"ID":"2d8715fd-8755-4bd6-82a7-bf49d61e1779","Type":"ContainerStarted","Data":"0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.582636 4842 generic.go:334] "Generic (PLEG): container finished" podID="52bba199-2794-4828-9a54-e1aac49fb223" containerID="7b7d5e5edb2af232c2055e5da49c69d329f4113726a849604a2b594aefa2f3af" exitCode=0 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.582702 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89ff-account-create-update-pb4bw" event={"ID":"52bba199-2794-4828-9a54-e1aac49fb223","Type":"ContainerDied","Data":"7b7d5e5edb2af232c2055e5da49c69d329f4113726a849604a2b594aefa2f3af"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.582725 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89ff-account-create-update-pb4bw" event={"ID":"52bba199-2794-4828-9a54-e1aac49fb223","Type":"ContainerStarted","Data":"b9b079e5b40935f5c3957e2ff08d97c88f3c365e78a54eba6ef83e9680d55e18"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.584234 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" event={"ID":"a9d15d01-9c12-4b4f-9cec-037a1d21fab1","Type":"ContainerStarted","Data":"e8f9c804c29efb0cbd22bbe4d584e668c739a0efdfc614e0546bb32ea70ef867"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.584256 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" event={"ID":"a9d15d01-9c12-4b4f-9cec-037a1d21fab1","Type":"ContainerStarted","Data":"9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.585747 4842 generic.go:334] "Generic (PLEG): container finished" podID="668f221e-e491-4ec6-9f40-82dd1afc3ac8" containerID="ca50f3bd514767840a56ccfe9f58d3e7f3e73682b97d7191a9419836cd607b01" exitCode=0 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.585776 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f00-account-create-update-llc96" event={"ID":"668f221e-e491-4ec6-9f40-82dd1afc3ac8","Type":"ContainerDied","Data":"ca50f3bd514767840a56ccfe9f58d3e7f3e73682b97d7191a9419836cd607b01"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.585802 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f00-account-create-update-llc96" event={"ID":"668f221e-e491-4ec6-9f40-82dd1afc3ac8","Type":"ContainerStarted","Data":"2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.587240 4842 generic.go:334] "Generic (PLEG): container finished" podID="4b414999-f3d0-4101-abe7-ed8c7747ce5f" containerID="b2f7cb4727d9784f10ff6a0c8a30a31bb44be887023eca0a860978903f19daa6" exitCode=0 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.587262 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dg9pd" event={"ID":"4b414999-f3d0-4101-abe7-ed8c7747ce5f","Type":"ContainerDied","Data":"b2f7cb4727d9784f10ff6a0c8a30a31bb44be887023eca0a860978903f19daa6"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.588613 4842 generic.go:334] "Generic (PLEG): container finished" podID="939ed5f9-679d-44c4-8282-d1404d98b420" 
containerID="2c7088cf1821b77c6f7eefcfe1152002a124d024b112d220292c3bfdaf924d4c" exitCode=0 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.588656 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79v8r" event={"ID":"939ed5f9-679d-44c4-8282-d1404d98b420","Type":"ContainerDied","Data":"2c7088cf1821b77c6f7eefcfe1152002a124d024b112d220292c3bfdaf924d4c"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.588672 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79v8r" event={"ID":"939ed5f9-679d-44c4-8282-d1404d98b420","Type":"ContainerStarted","Data":"41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.590765 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b469b995b-npwfd" event={"ID":"a18aba57-b830-47d3-9b18-8946414fdd1d","Type":"ContainerDied","Data":"c685a8dc8410d6a7a79b5205dd3ff23339631326844f2a5b84578d841706238e"} Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.590827 4842 scope.go:117] "RemoveContainer" containerID="f8f9e0a8b64ae08b996a6ff20de4cb61c2fe7c362caaa42c329de676a9077b38" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.590781 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b469b995b-npwfd" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.619433 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" podStartSLOduration=1.619414441 podStartE2EDuration="1.619414441s" podCreationTimestamp="2026-02-02 07:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:06:57.613473314 +0000 UTC m=+1242.990741226" watchObservedRunningTime="2026-02-02 07:06:57.619414441 +0000 UTC m=+1242.996682353" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.731566 4842 scope.go:117] "RemoveContainer" containerID="6747e535436e2bdd0c46d5273f8b5a7d29b3c3f7226e94896a48a5bfcdb6a2d9" Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.733419 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.741712 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b469b995b-npwfd"] Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.831414 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.831651 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-log" containerID="cri-o://5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce" gracePeriod=30 Feb 02 07:06:57 crc kubenswrapper[4842]: I0202 07:06:57.831763 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-httpd" containerID="cri-o://8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8" gracePeriod=30 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.605621 4842 generic.go:334] "Generic (PLEG): container finished" podID="a9d15d01-9c12-4b4f-9cec-037a1d21fab1" 
containerID="e8f9c804c29efb0cbd22bbe4d584e668c739a0efdfc614e0546bb32ea70ef867" exitCode=0 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.605681 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" event={"ID":"a9d15d01-9c12-4b4f-9cec-037a1d21fab1","Type":"ContainerDied","Data":"e8f9c804c29efb0cbd22bbe4d584e668c739a0efdfc614e0546bb32ea70ef867"} Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.612753 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerStarted","Data":"23dd0ca466edc848ab9f75914f169da25ba7c3c7918e89f13ac53448e128d009"} Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.612870 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="sg-core" containerID="cri-o://022aa50ba41d0a413d49d7816b95c9ce705b40b44d3e4b26928051ada603decd" gracePeriod=30 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.612895 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.612955 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="proxy-httpd" containerID="cri-o://23dd0ca466edc848ab9f75914f169da25ba7c3c7918e89f13ac53448e128d009" gracePeriod=30 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.613022 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-notification-agent" containerID="cri-o://36b2b05bbe375b399c98b67e29fc0579c7a94211ddd64f7ddba9592374c382bd" gracePeriod=30 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.613110 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-central-agent" containerID="cri-o://3a5cb3f49b99abe6192e05d777a57a2ec064de70a666aa2c8b933349f5030599" gracePeriod=30 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.628022 4842 generic.go:334] "Generic (PLEG): container finished" podID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerID="5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce" exitCode=143 Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.628101 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerDied","Data":"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce"} Feb 02 07:06:58 crc kubenswrapper[4842]: I0202 07:06:58.981246 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.001172 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.085177527 podStartE2EDuration="7.00115542s" podCreationTimestamp="2026-02-02 07:06:52 +0000 UTC" firstStartedPulling="2026-02-02 07:06:53.354104836 +0000 UTC m=+1238.731372748" lastFinishedPulling="2026-02-02 07:06:58.270082729 +0000 UTC m=+1243.647350641" observedRunningTime="2026-02-02 07:06:58.648974216 +0000 UTC m=+1244.026242158" watchObservedRunningTime="2026-02-02 07:06:59.00115542 +0000 UTC m=+1244.378423332" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.044787 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28qpr\" (UniqueName: \"kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr\") pod \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.044852 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts\") pod \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\" (UID: \"668f221e-e491-4ec6-9f40-82dd1afc3ac8\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.046028 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "668f221e-e491-4ec6-9f40-82dd1afc3ac8" (UID: "668f221e-e491-4ec6-9f40-82dd1afc3ac8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.058607 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr" (OuterVolumeSpecName: "kube-api-access-28qpr") pod "668f221e-e491-4ec6-9f40-82dd1afc3ac8" (UID: "668f221e-e491-4ec6-9f40-82dd1afc3ac8"). InnerVolumeSpecName "kube-api-access-28qpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.147204 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28qpr\" (UniqueName: \"kubernetes.io/projected/668f221e-e491-4ec6-9f40-82dd1afc3ac8-kube-api-access-28qpr\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.147242 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/668f221e-e491-4ec6-9f40-82dd1afc3ac8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.212853 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.218234 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.226348 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.235409 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.250082 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts\") pod \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.250168 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts\") pod \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.250195 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpvzr\" (UniqueName: \"kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr\") pod \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\" (UID: \"2d8715fd-8755-4bd6-82a7-bf49d61e1779\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.250286 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5k4k\" (UniqueName: \"kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k\") pod \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\" (UID: \"4b414999-f3d0-4101-abe7-ed8c7747ce5f\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.254933 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4b414999-f3d0-4101-abe7-ed8c7747ce5f" (UID: "4b414999-f3d0-4101-abe7-ed8c7747ce5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.255308 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d8715fd-8755-4bd6-82a7-bf49d61e1779" (UID: "2d8715fd-8755-4bd6-82a7-bf49d61e1779"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.256682 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k" (OuterVolumeSpecName: "kube-api-access-t5k4k") pod "4b414999-f3d0-4101-abe7-ed8c7747ce5f" (UID: "4b414999-f3d0-4101-abe7-ed8c7747ce5f"). InnerVolumeSpecName "kube-api-access-t5k4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.287571 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr" (OuterVolumeSpecName: "kube-api-access-zpvzr") pod "2d8715fd-8755-4bd6-82a7-bf49d61e1779" (UID: "2d8715fd-8755-4bd6-82a7-bf49d61e1779"). InnerVolumeSpecName "kube-api-access-zpvzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.352645 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq9fj\" (UniqueName: \"kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj\") pod \"52bba199-2794-4828-9a54-e1aac49fb223\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.352810 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts\") pod \"939ed5f9-679d-44c4-8282-d1404d98b420\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.352871 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts\") pod \"52bba199-2794-4828-9a54-e1aac49fb223\" (UID: \"52bba199-2794-4828-9a54-e1aac49fb223\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.352895 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4x9m\" (UniqueName: \"kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m\") pod \"939ed5f9-679d-44c4-8282-d1404d98b420\" (UID: \"939ed5f9-679d-44c4-8282-d1404d98b420\") " Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353200 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "939ed5f9-679d-44c4-8282-d1404d98b420" (UID: "939ed5f9-679d-44c4-8282-d1404d98b420"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353309 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52bba199-2794-4828-9a54-e1aac49fb223" (UID: "52bba199-2794-4828-9a54-e1aac49fb223"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353936 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/939ed5f9-679d-44c4-8282-d1404d98b420-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353953 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bba199-2794-4828-9a54-e1aac49fb223-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353962 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d8715fd-8755-4bd6-82a7-bf49d61e1779-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353971 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4b414999-f3d0-4101-abe7-ed8c7747ce5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353980 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpvzr\" (UniqueName: \"kubernetes.io/projected/2d8715fd-8755-4bd6-82a7-bf49d61e1779-kube-api-access-zpvzr\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.353990 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5k4k\" (UniqueName: \"kubernetes.io/projected/4b414999-f3d0-4101-abe7-ed8c7747ce5f-kube-api-access-t5k4k\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.355877 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj" (OuterVolumeSpecName: "kube-api-access-mq9fj") pod "52bba199-2794-4828-9a54-e1aac49fb223" (UID: "52bba199-2794-4828-9a54-e1aac49fb223"). InnerVolumeSpecName "kube-api-access-mq9fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.356491 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m" (OuterVolumeSpecName: "kube-api-access-m4x9m") pod "939ed5f9-679d-44c4-8282-d1404d98b420" (UID: "939ed5f9-679d-44c4-8282-d1404d98b420"). InnerVolumeSpecName "kube-api-access-m4x9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.450011 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" path="/var/lib/kubelet/pods/a18aba57-b830-47d3-9b18-8946414fdd1d/volumes" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.455810 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4x9m\" (UniqueName: \"kubernetes.io/projected/939ed5f9-679d-44c4-8282-d1404d98b420-kube-api-access-m4x9m\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.455861 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq9fj\" (UniqueName: \"kubernetes.io/projected/52bba199-2794-4828-9a54-e1aac49fb223-kube-api-access-mq9fj\") on node \"crc\" DevicePath \"\"" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.641790 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dg9pd" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.641791 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dg9pd" event={"ID":"4b414999-f3d0-4101-abe7-ed8c7747ce5f","Type":"ContainerDied","Data":"94ef265414e26a0b5006140a913a8d7ff6850122bee0165ff7e8ae90e61983f0"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.642006 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94ef265414e26a0b5006140a913a8d7ff6850122bee0165ff7e8ae90e61983f0" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.645823 4842 generic.go:334] "Generic (PLEG): container finished" podID="804c0232-0b21-4b4a-973e-620fef26b1de" containerID="022aa50ba41d0a413d49d7816b95c9ce705b40b44d3e4b26928051ada603decd" exitCode=2 Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.645865 4842 generic.go:334] "Generic (PLEG): container finished" podID="804c0232-0b21-4b4a-973e-620fef26b1de" containerID="36b2b05bbe375b399c98b67e29fc0579c7a94211ddd64f7ddba9592374c382bd" exitCode=0 Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.645982 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerDied","Data":"022aa50ba41d0a413d49d7816b95c9ce705b40b44d3e4b26928051ada603decd"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.646075 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerDied","Data":"36b2b05bbe375b399c98b67e29fc0579c7a94211ddd64f7ddba9592374c382bd"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.649071 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-79v8r" event={"ID":"939ed5f9-679d-44c4-8282-d1404d98b420","Type":"ContainerDied","Data":"41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.649130 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41a971de04948c9e44ce6dc40b3d77bba6e4a0cb17a05ba55bfb243374f2d86b" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.649129 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-79v8r" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.652951 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jph4l" event={"ID":"2d8715fd-8755-4bd6-82a7-bf49d61e1779","Type":"ContainerDied","Data":"0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.652990 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fde92f3b8f0ad9269fdb9699eb52b5f22ca179532eaf6391fcded5cb29f2ba6" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.653005 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jph4l" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.655292 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-pb4bw" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.655323 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89ff-account-create-update-pb4bw" event={"ID":"52bba199-2794-4828-9a54-e1aac49fb223","Type":"ContainerDied","Data":"b9b079e5b40935f5c3957e2ff08d97c88f3c365e78a54eba6ef83e9680d55e18"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.655360 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9b079e5b40935f5c3957e2ff08d97c88f3c365e78a54eba6ef83e9680d55e18" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.657370 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-llc96" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.657935 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f00-account-create-update-llc96" event={"ID":"668f221e-e491-4ec6-9f40-82dd1afc3ac8","Type":"ContainerDied","Data":"2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc"} Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.657964 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c40c85d611d7d09a42f24bc8981993f6fd753b9a53c230d0563bacda87102bc" Feb 02 07:06:59 crc kubenswrapper[4842]: I0202 07:06:59.976265 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.064986 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k8mp\" (UniqueName: \"kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp\") pod \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.065117 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts\") pod \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\" (UID: \"a9d15d01-9c12-4b4f-9cec-037a1d21fab1\") " Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.065860 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9d15d01-9c12-4b4f-9cec-037a1d21fab1" (UID: "a9d15d01-9c12-4b4f-9cec-037a1d21fab1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.071797 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp" (OuterVolumeSpecName: "kube-api-access-6k8mp") pod "a9d15d01-9c12-4b4f-9cec-037a1d21fab1" (UID: "a9d15d01-9c12-4b4f-9cec-037a1d21fab1"). InnerVolumeSpecName "kube-api-access-6k8mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.167402 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.167448 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k8mp\" (UniqueName: \"kubernetes.io/projected/a9d15d01-9c12-4b4f-9cec-037a1d21fab1-kube-api-access-6k8mp\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.461843 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.462196 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-httpd" containerID="cri-o://224fc5852a577215a4a41f26622ee8290bb52c1f1f725cc252747f84a03552e3" gracePeriod=30 Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.462551 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-log" containerID="cri-o://17b5094d456c9e7ac0aee7bc704529e5e3cdad0cd41064b1ee27f8f438f68541" gracePeriod=30 Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.665115 4842 generic.go:334] "Generic (PLEG): container finished" podID="74fb1197-2202-4b15-a858-05dd736a1a26" containerID="17b5094d456c9e7ac0aee7bc704529e5e3cdad0cd41064b1ee27f8f438f68541" exitCode=143 Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.665167 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerDied","Data":"17b5094d456c9e7ac0aee7bc704529e5e3cdad0cd41064b1ee27f8f438f68541"} Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.666573 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" event={"ID":"a9d15d01-9c12-4b4f-9cec-037a1d21fab1","Type":"ContainerDied","Data":"9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581"} Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.666605 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ccc4349841c5450246f1eb65b4db6e6964dabbd241a9da4c8ab5313470a2581" Feb 02 07:07:00 crc kubenswrapper[4842]: I0202 07:07:00.666699 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-hm58m" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.190474 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6htfz"] Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191344 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-api" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191367 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-api" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191385 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939ed5f9-679d-44c4-8282-d1404d98b420" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191410 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="939ed5f9-679d-44c4-8282-d1404d98b420" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191434 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8715fd-8755-4bd6-82a7-bf49d61e1779" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191442 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8715fd-8755-4bd6-82a7-bf49d61e1779" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191459 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bba199-2794-4828-9a54-e1aac49fb223" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191467 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bba199-2794-4828-9a54-e1aac49fb223" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191490 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d15d01-9c12-4b4f-9cec-037a1d21fab1" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191498 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d15d01-9c12-4b4f-9cec-037a1d21fab1" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191511 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b414999-f3d0-4101-abe7-ed8c7747ce5f" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191519 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b414999-f3d0-4101-abe7-ed8c7747ce5f" 
containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191541 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-httpd" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191549 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-httpd" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.191570 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668f221e-e491-4ec6-9f40-82dd1afc3ac8" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191578 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="668f221e-e491-4ec6-9f40-82dd1afc3ac8" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191783 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-httpd" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191798 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="52bba199-2794-4828-9a54-e1aac49fb223" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191808 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="939ed5f9-679d-44c4-8282-d1404d98b420" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191818 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8715fd-8755-4bd6-82a7-bf49d61e1779" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191832 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18aba57-b830-47d3-9b18-8946414fdd1d" containerName="neutron-api" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191848 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="668f221e-e491-4ec6-9f40-82dd1afc3ac8" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191863 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d15d01-9c12-4b4f-9cec-037a1d21fab1" containerName="mariadb-account-create-update" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.191883 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b414999-f3d0-4101-abe7-ed8c7747ce5f" containerName="mariadb-database-create" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.192633 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.196324 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.196619 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.196791 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zt7nb" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.203844 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6htfz"] Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.286943 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.287003 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcz9l\" (UniqueName: \"kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.287100 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.287299 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.389107 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.389165 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcz9l\" (UniqueName: \"kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.389242 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: 
\"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.389343 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.395248 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.395833 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.399699 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.418268 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcz9l\" (UniqueName: \"kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l\") pod \"nova-cell0-conductor-db-sync-6htfz\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.517798 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.551040 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.591845 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7x95\" (UniqueName: \"kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.591906 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.591938 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.591961 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.592421 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.592452 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.592601 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.592662 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\" (UID: \"09febcea-8bf3-43b8-b6ff-ae8a0e445519\") " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.597704 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts" (OuterVolumeSpecName: "scripts") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.598609 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.599025 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs" (OuterVolumeSpecName: "logs") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.600770 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.602402 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95" (OuterVolumeSpecName: "kube-api-access-m7x95") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "kube-api-access-m7x95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.625352 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.662741 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data" (OuterVolumeSpecName: "config-data") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.685839 4842 generic.go:334] "Generic (PLEG): container finished" podID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerID="8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8" exitCode=0 Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.685920 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerDied","Data":"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8"} Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.685947 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"09febcea-8bf3-43b8-b6ff-ae8a0e445519","Type":"ContainerDied","Data":"a5ef0c57463087c53e29eaaeb479b34c51cb5e6f894ab3af4029762d8f230dca"} Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.685970 4842 scope.go:117] "RemoveContainer" containerID="8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.686086 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.692160 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "09febcea-8bf3-43b8-b6ff-ae8a0e445519" (UID: "09febcea-8bf3-43b8-b6ff-ae8a0e445519"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696278 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7x95\" (UniqueName: \"kubernetes.io/projected/09febcea-8bf3-43b8-b6ff-ae8a0e445519-kube-api-access-m7x95\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696301 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696318 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696327 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696336 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09febcea-8bf3-43b8-b6ff-ae8a0e445519-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696345 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696353 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09febcea-8bf3-43b8-b6ff-ae8a0e445519-config-data\") 
on node \"crc\" DevicePath \"\"" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.696379 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.723258 4842 scope.go:117] "RemoveContainer" containerID="5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.731082 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.744894 4842 scope.go:117] "RemoveContainer" containerID="8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.745322 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8\": container with ID starting with 8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8 not found: ID does not exist" containerID="8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.745361 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8"} err="failed to get container status \"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8\": rpc error: code = NotFound desc = could not find container \"8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8\": container with ID starting with 8d3926fc2f7172c658b9b2069d4954fc955daf88fa215cbbf56fe1879ccec1b8 not found: ID does not exist" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.745407 4842 scope.go:117] "RemoveContainer" containerID="5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce" Feb 02 07:07:01 crc kubenswrapper[4842]: E0202 07:07:01.745862 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce\": container with ID starting with 5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce not found: ID does not exist" containerID="5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.745914 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce"} err="failed to get container status \"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce\": rpc error: code = NotFound desc = could not find container \"5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce\": container with ID starting with 5ef15884271c02db7ac2aacfcafc7eda559d7d1e5207b1cc74589dab6d9494ce not found: ID does not exist" Feb 02 07:07:01 crc kubenswrapper[4842]: I0202 07:07:01.799371 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.006402 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-6htfz"] Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.028354 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.050414 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.067562 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:07:02 crc kubenswrapper[4842]: E0202 07:07:02.068124 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-log" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.068153 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-log" Feb 02 07:07:02 crc kubenswrapper[4842]: E0202 07:07:02.068178 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-httpd" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.068184 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-httpd" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.068805 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-log" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.068860 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" containerName="glance-httpd" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.069939 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.072852 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.073014 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.081131 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104112 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104154 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104208 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104245 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9pr5\" (UniqueName: \"kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104287 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104305 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104327 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.104351 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.205867 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.205932 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.205955 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206008 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206029 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9pr5\" (UniqueName: \"kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206091 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206111 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206134 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.206429 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.207384 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.207622 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.211348 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.212525 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.213602 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.214085 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.228864 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9pr5\" (UniqueName: \"kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.236672 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.413075 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.705982 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6htfz" event={"ID":"fb013bc6-805e-43d5-95f8-98597c33fa9e","Type":"ContainerStarted","Data":"90c87f6f53b22b92a4b5061d88a8063f32c54f968d8334ec9cca4c935c7373bc"} Feb 02 07:07:02 crc kubenswrapper[4842]: I0202 07:07:02.763756 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:07:02 crc kubenswrapper[4842]: W0202 07:07:02.768526 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34f55116_a518_4f21_8816_6f8232a6f68d.slice/crio-03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1 WatchSource:0}: Error finding container 03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1: Status 404 returned error can't find the container with id 03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1 Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.448361 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09febcea-8bf3-43b8-b6ff-ae8a0e445519" path="/var/lib/kubelet/pods/09febcea-8bf3-43b8-b6ff-ae8a0e445519/volumes" Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.726649 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerStarted","Data":"c593d09b2735487782551786767a4ed77fad095c2d0a78c5ed62f1b78de5ce7e"} Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.726687 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerStarted","Data":"03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1"} Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.734700 4842 generic.go:334] "Generic (PLEG): container finished" podID="74fb1197-2202-4b15-a858-05dd736a1a26" containerID="224fc5852a577215a4a41f26622ee8290bb52c1f1f725cc252747f84a03552e3" exitCode=0 Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.734743 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerDied","Data":"224fc5852a577215a4a41f26622ee8290bb52c1f1f725cc252747f84a03552e3"} Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.737493 4842 generic.go:334] "Generic (PLEG): container finished" podID="804c0232-0b21-4b4a-973e-620fef26b1de" containerID="3a5cb3f49b99abe6192e05d777a57a2ec064de70a666aa2c8b933349f5030599" exitCode=0 Feb 02 07:07:03 crc kubenswrapper[4842]: I0202 07:07:03.737521 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerDied","Data":"3a5cb3f49b99abe6192e05d777a57a2ec064de70a666aa2c8b933349f5030599"} Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.030058 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136617 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136739 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136801 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136831 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136904 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sx9t\" (UniqueName: \"kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136945 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.136973 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.137045 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle\") pod \"74fb1197-2202-4b15-a858-05dd736a1a26\" (UID: \"74fb1197-2202-4b15-a858-05dd736a1a26\") " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.144539 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts" (OuterVolumeSpecName: "scripts") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.146104 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs" (OuterVolumeSpecName: "logs") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.146324 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.147406 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.149417 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t" (OuterVolumeSpecName: "kube-api-access-9sx9t") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "kube-api-access-9sx9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.175751 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.213375 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data" (OuterVolumeSpecName: "config-data") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.236684 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "74fb1197-2202-4b15-a858-05dd736a1a26" (UID: "74fb1197-2202-4b15-a858-05dd736a1a26"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239381 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sx9t\" (UniqueName: \"kubernetes.io/projected/74fb1197-2202-4b15-a858-05dd736a1a26-kube-api-access-9sx9t\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239453 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239466 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239475 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239487 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239496 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/74fb1197-2202-4b15-a858-05dd736a1a26-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239503 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.239512 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74fb1197-2202-4b15-a858-05dd736a1a26-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.257687 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.341379 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.772991 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerStarted","Data":"72e60f391adc327a7666947b2251ee7da0c5b5a42927991c1ba5e739d160e596"} Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.775472 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"74fb1197-2202-4b15-a858-05dd736a1a26","Type":"ContainerDied","Data":"c3a9d9eee3d9319f1e0b533f2cb62666947fc026870c7a05529e2c7e13ac265d"} Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.775534 4842 scope.go:117] "RemoveContainer" containerID="224fc5852a577215a4a41f26622ee8290bb52c1f1f725cc252747f84a03552e3" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.775637 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.809315 4842 scope.go:117] "RemoveContainer" containerID="17b5094d456c9e7ac0aee7bc704529e5e3cdad0cd41064b1ee27f8f438f68541" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.809961 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.809941125 podStartE2EDuration="2.809941125s" podCreationTimestamp="2026-02-02 07:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:04.793931368 +0000 UTC m=+1250.171199280" watchObservedRunningTime="2026-02-02 07:07:04.809941125 +0000 UTC m=+1250.187209057" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.858245 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.864024 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.891777 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:04 crc kubenswrapper[4842]: E0202 07:07:04.892259 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-log" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.892280 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-log" Feb 02 07:07:04 crc kubenswrapper[4842]: E0202 07:07:04.892303 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-httpd" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.892312 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-httpd" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.892575 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-httpd" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.892600 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" containerName="glance-log" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.893711 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.896826 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.896842 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 02 07:07:04 crc kubenswrapper[4842]: I0202 07:07:04.921661 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069050 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069114 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069337 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069465 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069594 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069637 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rq6l\" (UniqueName: \"kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069788 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.069851 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172058 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172128 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172204 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172254 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172279 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172315 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172365 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172392 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rq6l\" (UniqueName: \"kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172589 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172645 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.172656 4842 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.176941 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.177292 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.177383 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.177580 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.200300 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rq6l\" (UniqueName: \"kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.203599 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.214945 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.445430 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74fb1197-2202-4b15-a858-05dd736a1a26" path="/var/lib/kubelet/pods/74fb1197-2202-4b15-a858-05dd736a1a26/volumes" Feb 02 07:07:05 crc kubenswrapper[4842]: I0202 07:07:05.746887 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:07:09 crc kubenswrapper[4842]: I0202 07:07:09.825570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerStarted","Data":"1eecf23079bd634775107b900580aa4bb87379a656bc114e56acf8d85609c009"} Feb 02 07:07:10 crc kubenswrapper[4842]: I0202 07:07:10.838277 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerStarted","Data":"50694d5591176c65770672c30837d60f3438d04ee3ca91b5bc53b0366f9835df"} Feb 02 07:07:10 crc kubenswrapper[4842]: I0202 07:07:10.838747 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerStarted","Data":"baeb51b0b4bb9444bd98551a3cc3dcb68f182ab93c0b62223c4c0a0707790ceb"} Feb 02 07:07:10 crc kubenswrapper[4842]: I0202 07:07:10.846419 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6htfz" event={"ID":"fb013bc6-805e-43d5-95f8-98597c33fa9e","Type":"ContainerStarted","Data":"f5f4ebc4957f3bd8515b3e4a7d7bf4b7c05ae94bf9d531ffc8914bcdc9bde611"} Feb 02 07:07:10 crc kubenswrapper[4842]: I0202 07:07:10.867441 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.867422218 podStartE2EDuration="6.867422218s" podCreationTimestamp="2026-02-02 07:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:10.860764443 +0000 UTC m=+1256.238032365" watchObservedRunningTime="2026-02-02 07:07:10.867422218 +0000 UTC m=+1256.244690150" Feb 02 07:07:10 crc kubenswrapper[4842]: I0202 07:07:10.888101 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6htfz" podStartSLOduration=2.169806867 podStartE2EDuration="9.88808215s" podCreationTimestamp="2026-02-02 07:07:01 +0000 UTC" firstStartedPulling="2026-02-02 07:07:01.996911099 +0000 UTC m=+1247.374179011" lastFinishedPulling="2026-02-02 07:07:09.715186352 +0000 UTC m=+1255.092454294" observedRunningTime="2026-02-02 07:07:10.886358137 +0000 UTC m=+1256.263626059" watchObservedRunningTime="2026-02-02 07:07:10.88808215 +0000 UTC m=+1256.265350062" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.146762 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.147289 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.413849 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.413917 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.476326 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.493376 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.869880 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 07:07:12 crc kubenswrapper[4842]: I0202 07:07:12.869927 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 02 07:07:14 crc kubenswrapper[4842]: I0202 07:07:14.689548 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 07:07:14 crc kubenswrapper[4842]: I0202 07:07:14.699862 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.215909 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.215984 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.267868 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.278979 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.898863 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:15 crc kubenswrapper[4842]: I0202 07:07:15.899100 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:17 crc kubenswrapper[4842]: I0202 07:07:17.672894 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:17 crc kubenswrapper[4842]: I0202 07:07:17.674035 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 02 07:07:19 crc kubenswrapper[4842]: I0202 07:07:19.952583 4842 generic.go:334] "Generic (PLEG): container finished" podID="fb013bc6-805e-43d5-95f8-98597c33fa9e" containerID="f5f4ebc4957f3bd8515b3e4a7d7bf4b7c05ae94bf9d531ffc8914bcdc9bde611" exitCode=0 Feb 02 07:07:19 crc kubenswrapper[4842]: I0202 07:07:19.952615 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6htfz" 
event={"ID":"fb013bc6-805e-43d5-95f8-98597c33fa9e","Type":"ContainerDied","Data":"f5f4ebc4957f3bd8515b3e4a7d7bf4b7c05ae94bf9d531ffc8914bcdc9bde611"} Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.423805 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.575881 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle\") pod \"fb013bc6-805e-43d5-95f8-98597c33fa9e\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.576019 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data\") pod \"fb013bc6-805e-43d5-95f8-98597c33fa9e\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.576271 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts\") pod \"fb013bc6-805e-43d5-95f8-98597c33fa9e\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.576340 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcz9l\" (UniqueName: \"kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l\") pod \"fb013bc6-805e-43d5-95f8-98597c33fa9e\" (UID: \"fb013bc6-805e-43d5-95f8-98597c33fa9e\") " Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.582975 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts" (OuterVolumeSpecName: "scripts") pod "fb013bc6-805e-43d5-95f8-98597c33fa9e" (UID: "fb013bc6-805e-43d5-95f8-98597c33fa9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.584355 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l" (OuterVolumeSpecName: "kube-api-access-mcz9l") pod "fb013bc6-805e-43d5-95f8-98597c33fa9e" (UID: "fb013bc6-805e-43d5-95f8-98597c33fa9e"). InnerVolumeSpecName "kube-api-access-mcz9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.631206 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb013bc6-805e-43d5-95f8-98597c33fa9e" (UID: "fb013bc6-805e-43d5-95f8-98597c33fa9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.632733 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data" (OuterVolumeSpecName: "config-data") pod "fb013bc6-805e-43d5-95f8-98597c33fa9e" (UID: "fb013bc6-805e-43d5-95f8-98597c33fa9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.679793 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.679839 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.679859 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb013bc6-805e-43d5-95f8-98597c33fa9e-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.679878 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcz9l\" (UniqueName: \"kubernetes.io/projected/fb013bc6-805e-43d5-95f8-98597c33fa9e-kube-api-access-mcz9l\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.986433 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6htfz" event={"ID":"fb013bc6-805e-43d5-95f8-98597c33fa9e","Type":"ContainerDied","Data":"90c87f6f53b22b92a4b5061d88a8063f32c54f968d8334ec9cca4c935c7373bc"} Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.986484 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c87f6f53b22b92a4b5061d88a8063f32c54f968d8334ec9cca4c935c7373bc" Feb 02 07:07:21 crc kubenswrapper[4842]: I0202 07:07:21.986594 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6htfz" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.152565 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 07:07:22 crc kubenswrapper[4842]: E0202 07:07:22.153401 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb013bc6-805e-43d5-95f8-98597c33fa9e" containerName="nova-cell0-conductor-db-sync" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.153424 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb013bc6-805e-43d5-95f8-98597c33fa9e" containerName="nova-cell0-conductor-db-sync" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.153671 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb013bc6-805e-43d5-95f8-98597c33fa9e" containerName="nova-cell0-conductor-db-sync" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.154411 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.156923 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zt7nb" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.157130 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.165788 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.293333 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf5pj\" (UniqueName: \"kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.293472 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.293555 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.396121 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.396282 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.396481 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf5pj\" (UniqueName: \"kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.406556 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.406713 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.428492 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf5pj\" (UniqueName: \"kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj\") pod \"nova-cell0-conductor-0\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") " pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.508189 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.833316 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 07:07:22 crc kubenswrapper[4842]: I0202 07:07:22.917618 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 02 07:07:23 crc kubenswrapper[4842]: I0202 07:07:23.000743 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cbda1f81-b862-4ee7-84ce-590c353e4d5b","Type":"ContainerStarted","Data":"85e914a150668613743c13aeff477024d4b0461bd9157d8138fdfcfd7144ee67"} Feb 02 07:07:24 crc kubenswrapper[4842]: I0202 07:07:24.015475 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cbda1f81-b862-4ee7-84ce-590c353e4d5b","Type":"ContainerStarted","Data":"75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4"} Feb 02 07:07:24 crc kubenswrapper[4842]: I0202 07:07:24.018288 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:24 crc kubenswrapper[4842]: I0202 07:07:24.045731 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.045703817 podStartE2EDuration="2.045703817s" podCreationTimestamp="2026-02-02 07:07:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:24.03776201 +0000 UTC m=+1269.415029962" watchObservedRunningTime="2026-02-02 07:07:24.045703817 +0000 UTC m=+1269.422971759" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.068351 4842 generic.go:334] "Generic (PLEG): container finished" podID="804c0232-0b21-4b4a-973e-620fef26b1de" containerID="23dd0ca466edc848ab9f75914f169da25ba7c3c7918e89f13ac53448e128d009" exitCode=137 Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.068414 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerDied","Data":"23dd0ca466edc848ab9f75914f169da25ba7c3c7918e89f13ac53448e128d009"} Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.069287 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"804c0232-0b21-4b4a-973e-620fef26b1de","Type":"ContainerDied","Data":"610ef45c658d7af4f1bfccb5ab1bcf0f7f84312f0fd214a19b9a637d039efaf5"} Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.069309 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="610ef45c658d7af4f1bfccb5ab1bcf0f7f84312f0fd214a19b9a637d039efaf5" Feb 02 07:07:29 crc 
kubenswrapper[4842]: I0202 07:07:29.075065 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148057 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148154 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148178 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148260 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp7mr\" (UniqueName: \"kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148339 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148449 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.148501 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd\") pod \"804c0232-0b21-4b4a-973e-620fef26b1de\" (UID: \"804c0232-0b21-4b4a-973e-620fef26b1de\") " Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.149928 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.150118 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "run-httpd". 
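
Annotation: the ceilometer-0 teardown above began with a container finishing with exitCode=137, which decodes by the shell convention 128 + signal number as SIGKILL(9); that is what a runtime typically reports when a container does not stop within its termination grace period and is force-killed. A minimal check of the decoding:

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        fmt.Println(128 + int(syscall.SIGKILL)) // prints 137
    }
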
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.156563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts" (OuterVolumeSpecName: "scripts") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.160627 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr" (OuterVolumeSpecName: "kube-api-access-dp7mr") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "kube-api-access-dp7mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.210051 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.251528 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.251578 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp7mr\" (UniqueName: \"kubernetes.io/projected/804c0232-0b21-4b4a-973e-620fef26b1de-kube-api-access-dp7mr\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.251601 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.251620 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.251638 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/804c0232-0b21-4b4a-973e-620fef26b1de-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.263690 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.287749 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data" (OuterVolumeSpecName: "config-data") pod "804c0232-0b21-4b4a-973e-620fef26b1de" (UID: "804c0232-0b21-4b4a-973e-620fef26b1de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.353970 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:29 crc kubenswrapper[4842]: I0202 07:07:29.354011 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/804c0232-0b21-4b4a-973e-620fef26b1de-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.078740 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.104208 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.113937 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.143631 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:07:30 crc kubenswrapper[4842]: E0202 07:07:30.144076 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="proxy-httpd" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144102 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="proxy-httpd" Feb 02 07:07:30 crc kubenswrapper[4842]: E0202 07:07:30.144122 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-central-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144132 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-central-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: E0202 07:07:30.144160 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-notification-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144169 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-notification-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: E0202 07:07:30.144183 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="sg-core" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144192 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="sg-core" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144489 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="sg-core" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144531 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="proxy-httpd" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144554 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-central-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.144571 4842 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="804c0232-0b21-4b4a-973e-620fef26b1de" containerName="ceilometer-notification-agent" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.147117 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.151686 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.151876 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.164563 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272016 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r667c\" (UniqueName: \"kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272080 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272107 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272128 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272173 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272193 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.272293 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.373799 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r667c\" (UniqueName: 
\"kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.373890 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.373928 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.373965 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.374074 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.374889 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.374959 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.374982 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.375062 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.381526 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.381868 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.389841 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.398384 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.405746 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r667c\" (UniqueName: \"kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c\") pod \"ceilometer-0\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " pod="openstack/ceilometer-0" Feb 02 07:07:30 crc kubenswrapper[4842]: I0202 07:07:30.476352 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:07:31 crc kubenswrapper[4842]: W0202 07:07:31.014327 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf0e5e43_2690_43bd_8bc5_412e93b15aa7.slice/crio-11a6c57757bd099cc7d5233c6d0b0381d8088a06d822f2cec437e583d985118d WatchSource:0}: Error finding container 11a6c57757bd099cc7d5233c6d0b0381d8088a06d822f2cec437e583d985118d: Status 404 returned error can't find the container with id 11a6c57757bd099cc7d5233c6d0b0381d8088a06d822f2cec437e583d985118d Feb 02 07:07:31 crc kubenswrapper[4842]: I0202 07:07:31.016208 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:07:31 crc kubenswrapper[4842]: I0202 07:07:31.099334 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerStarted","Data":"11a6c57757bd099cc7d5233c6d0b0381d8088a06d822f2cec437e583d985118d"} Feb 02 07:07:31 crc kubenswrapper[4842]: I0202 07:07:31.450024 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804c0232-0b21-4b4a-973e-620fef26b1de" path="/var/lib/kubelet/pods/804c0232-0b21-4b4a-973e-620fef26b1de/volumes" Feb 02 07:07:32 crc kubenswrapper[4842]: I0202 07:07:32.112788 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerStarted","Data":"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c"} Feb 02 07:07:32 crc kubenswrapper[4842]: I0202 07:07:32.548150 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.069401 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-d648k"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.070968 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.075769 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.076049 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.082150 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-d648k"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.141737 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4gwk\" (UniqueName: \"kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.141836 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.142064 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.142277 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.172950 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerStarted","Data":"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70"} Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.173009 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerStarted","Data":"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3"} Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.244256 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gwk\" (UniqueName: \"kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.244329 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " 
pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.244395 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.244444 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.250233 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.251428 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.256325 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.264459 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.264832 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.276870 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.293625 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.309337 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.310959 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.313986 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.319992 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gwk\" (UniqueName: \"kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk\") pod \"nova-cell0-cell-mapping-d648k\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.346302 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.346376 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkhsq\" (UniqueName: \"kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.346403 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.349258 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.396397 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.403659 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.407126 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.446422 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.447993 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448039 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448096 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448116 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh6zq\" (UniqueName: \"kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448173 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkhsq\" (UniqueName: \"kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448197 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.448238 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.471404 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.491189 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.491243 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.492424 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.508469 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.522260 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.522844 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkhsq\" (UniqueName: \"kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq\") pod \"nova-scheduler-0\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.534903 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555426 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh6zq\" (UniqueName: \"kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555475 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555550 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555604 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xncht\" (UniqueName: \"kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555628 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555647 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555661 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.555687 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.559019 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.559342 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.561272 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.571406 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh6zq\" (UniqueName: \"kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq\") pod \"nova-api-0\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.587295 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.589734 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.601925 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"] Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.659936 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.659989 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660035 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660054 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pst98\" (UniqueName: \"kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660073 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660101 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660157 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660196 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55tx7\" (UniqueName: \"kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660225 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660250 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660278 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xncht\" (UniqueName: \"kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660293 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.660314 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.661498 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.665231 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.669917 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.679897 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xncht\" (UniqueName: \"kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht\") pod \"nova-metadata-0\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.712302 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.729153 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.747411 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.763983 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.764024 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pst98\" (UniqueName: \"kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.764462 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.764549 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765086 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55tx7\" (UniqueName: \"kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765111 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765153 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765257 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765271 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc\") pod 
\"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.765289 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.766109 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.769482 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.769584 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.769699 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.769723 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.772502 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.782480 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pst98\" (UniqueName: \"kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98\") pod \"nova-cell1-novncproxy-0\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.782547 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55tx7\" (UniqueName: \"kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7\") pod \"dnsmasq-dns-557bbc7df7-8rcz9\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" 
Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.838699 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:33 crc kubenswrapper[4842]: I0202 07:07:33.915469 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.001854 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-d648k"] Feb 02 07:07:34 crc kubenswrapper[4842]: W0202 07:07:34.020334 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1048c2f_1504_465a_b0fb_da368d25f0ff.slice/crio-317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33 WatchSource:0}: Error finding container 317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33: Status 404 returned error can't find the container with id 317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33 Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.165820 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pnj4n"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.167618 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.172824 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.173043 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.195149 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pnj4n"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.196493 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d648k" event={"ID":"a1048c2f-1504-465a-b0fb-da368d25f0ff","Type":"ContainerStarted","Data":"317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33"} Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.275945 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.276040 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.276143 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 
07:07:34.276195 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7668g\" (UniqueName: \"kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.301364 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.338966 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.342689 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.378489 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.378556 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.378652 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.378692 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7668g\" (UniqueName: \"kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.391984 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.392933 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.404370 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " 
pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.412873 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7668g\" (UniqueName: \"kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g\") pod \"nova-cell1-conductor-db-sync-pnj4n\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") " pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.496532 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.503898 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:07:34 crc kubenswrapper[4842]: I0202 07:07:34.548707 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.005576 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pnj4n"] Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.218440 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d440b49-02aa-4a41-9055-8c58b5f9b1f9","Type":"ContainerStarted","Data":"9f46e2c0ade54ebb64e6e6a408030ea704892c226f6722e2d58e5f583b4c2039"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.223322 4842 generic.go:334] "Generic (PLEG): container finished" podID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerID="a176e8b4ea564bc302309fcba58a47b8e68f174edeb83a184476a852cc3c272e" exitCode=0 Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.223392 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" event={"ID":"9e447f46-c8cc-42f2-92e6-1465a9f407c6","Type":"ContainerDied","Data":"a176e8b4ea564bc302309fcba58a47b8e68f174edeb83a184476a852cc3c272e"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.223417 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" event={"ID":"9e447f46-c8cc-42f2-92e6-1465a9f407c6","Type":"ContainerStarted","Data":"451377c79842f0376185bd4f8a1618a4b5a16afcc7be3c0724fb62e157fb3755"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.230308 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d648k" event={"ID":"a1048c2f-1504-465a-b0fb-da368d25f0ff","Type":"ContainerStarted","Data":"55d824abd1b5b048d587e61fdc8db2106087cb9113bf5c22c3cc72f341861791"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.232095 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerStarted","Data":"0b0025ccff75b8a427586c74f1235a072bc0cd643e505e2735b58d50091fa295"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.233136 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36","Type":"ContainerStarted","Data":"96da2ab68db04d21f4a7c4434a8ff3b113106acfae59f50f9689e724aa76088b"} Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.234039 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerStarted","Data":"4399ed66cbe5ee83e1b05af70a328b096fc6683212b7ff5ef2c0328dbfd1bfc0"} 
Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.253536 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerStarted","Data":"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01"}
Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.254378 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.272535 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" event={"ID":"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2","Type":"ContainerStarted","Data":"f408d96c1a5dcbacb2299cd3630fe7dab0d27ba0d70de87656f8d0bbabc0a580"}
Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.294125 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-d648k" podStartSLOduration=2.294102532 podStartE2EDuration="2.294102532s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:35.262145269 +0000 UTC m=+1280.639413181" watchObservedRunningTime="2026-02-02 07:07:35.294102532 +0000 UTC m=+1280.671370444"
Feb 02 07:07:35 crc kubenswrapper[4842]: I0202 07:07:35.329907 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5813434709999998 podStartE2EDuration="5.329886959s" podCreationTimestamp="2026-02-02 07:07:30 +0000 UTC" firstStartedPulling="2026-02-02 07:07:31.017910297 +0000 UTC m=+1276.395178249" lastFinishedPulling="2026-02-02 07:07:34.766453805 +0000 UTC m=+1280.143721737" observedRunningTime="2026-02-02 07:07:35.285800206 +0000 UTC m=+1280.663068118" watchObservedRunningTime="2026-02-02 07:07:35.329886959 +0000 UTC m=+1280.707154871"
Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.311376 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" event={"ID":"9e447f46-c8cc-42f2-92e6-1465a9f407c6","Type":"ContainerStarted","Data":"5f6dabb3b7c34feb5a2123ac9fa2eb87a3cf03a3caf3efd65fb72c179cb7cd52"}
Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.312457 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9"
Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.317313 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" event={"ID":"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2","Type":"ContainerStarted","Data":"2d911f330fb7cdc5064800cce65135b706e9f3cc93857bcb38ce5bd51f0bd398"}
Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.348201 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" podStartSLOduration=3.348180364 podStartE2EDuration="3.348180364s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:36.329709805 +0000 UTC m=+1281.706977737" watchObservedRunningTime="2026-02-02 07:07:36.348180364 +0000 UTC m=+1281.725448276"
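
The pod_startup_latency_tracker entries encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window from firstStartedPulling to lastFinishedPulling (when those are the zero 0001-01-01 timestamps no pull was needed, so the two durations coincide, as in the nova-cell0-cell-mapping-d648k and dnsmasq entries above). Checking the ceilometer-0 entry: 5.329886959s end to end, minus a pull window of roughly 3.7485s, gives the logged 1.5813...; the final digits only line up if the pull window is taken from the monotonic m=+ offsets rather than the wall-clock stamps. A sketch of that arithmetic, inferred from the logged numbers rather than quoted from kubelet:

// slomath.go: reproduce the ceilometer-0 startup-latency numbers above.
// This is an inference from the values in this log, not kubelet's
// actual pod_startup_latency_tracker implementation.
package main

import "fmt"

func main() {
	// podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
	// = 07:07:35.329886959 - 07:07:30 = 5.329886959s.
	e2e := 5.329886959
	// Image-pull window from the monotonic offsets of firstStartedPulling
	// and lastFinishedPulling (m=+1276.395178249 to m=+1280.143721737).
	pull := 1280.143721737 - 1276.395178249
	fmt.Printf("podStartSLOduration ~= %.9f s\n", e2e-pull) // 1.581343471, as logged
}
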
Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.380449 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" podStartSLOduration=2.380430393 podStartE2EDuration="2.380430393s" podCreationTimestamp="2026-02-02 07:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:36.344689597 +0000 UTC m=+1281.721957509" watchObservedRunningTime="2026-02-02 07:07:36.380430393 +0000 UTC m=+1281.757698305"
Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.912491 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:36 crc kubenswrapper[4842]: I0202 07:07:36.942642 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.351625 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36","Type":"ContainerStarted","Data":"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7"}
Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.352059 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7" gracePeriod=30
Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.357763 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerStarted","Data":"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea"}
Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.357841 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerStarted","Data":"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229"}
Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.361702 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d440b49-02aa-4a41-9055-8c58b5f9b1f9","Type":"ContainerStarted","Data":"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb"}
Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.367352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerStarted","Data":"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673"}
Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.367396 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerStarted","Data":"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6"}
Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.367683 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-log" containerID="cri-o://09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" gracePeriod=30
Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.367709 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-metadata"
containerID="cri-o://595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" gracePeriod=30 Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.374626 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.637756205 podStartE2EDuration="6.374609753s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="2026-02-02 07:07:34.503256388 +0000 UTC m=+1279.880524300" lastFinishedPulling="2026-02-02 07:07:38.240109936 +0000 UTC m=+1283.617377848" observedRunningTime="2026-02-02 07:07:39.372626643 +0000 UTC m=+1284.749894565" watchObservedRunningTime="2026-02-02 07:07:39.374609753 +0000 UTC m=+1284.751877665" Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.401528 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.509397201 podStartE2EDuration="6.40150408s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="2026-02-02 07:07:34.347983377 +0000 UTC m=+1279.725251289" lastFinishedPulling="2026-02-02 07:07:38.240090266 +0000 UTC m=+1283.617358168" observedRunningTime="2026-02-02 07:07:39.392094236 +0000 UTC m=+1284.769362178" watchObservedRunningTime="2026-02-02 07:07:39.40150408 +0000 UTC m=+1284.778772002" Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.408832 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.517085501 podStartE2EDuration="6.408814951s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="2026-02-02 07:07:34.348433628 +0000 UTC m=+1279.725701540" lastFinishedPulling="2026-02-02 07:07:38.240163078 +0000 UTC m=+1283.617430990" observedRunningTime="2026-02-02 07:07:39.407302373 +0000 UTC m=+1284.784570295" watchObservedRunningTime="2026-02-02 07:07:39.408814951 +0000 UTC m=+1284.786082873" Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.426838 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.491007355 podStartE2EDuration="6.426820868s" podCreationTimestamp="2026-02-02 07:07:33 +0000 UTC" firstStartedPulling="2026-02-02 07:07:34.307261167 +0000 UTC m=+1279.684529079" lastFinishedPulling="2026-02-02 07:07:38.24307466 +0000 UTC m=+1283.620342592" observedRunningTime="2026-02-02 07:07:39.425355571 +0000 UTC m=+1284.802623493" watchObservedRunningTime="2026-02-02 07:07:39.426820868 +0000 UTC m=+1284.804088780" Feb 02 07:07:39 crc kubenswrapper[4842]: I0202 07:07:39.985944 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.119378 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data\") pod \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.119513 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xncht\" (UniqueName: \"kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht\") pod \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.119542 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle\") pod \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.119730 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs\") pod \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\" (UID: \"17bc6e8b-33ee-4fee-be1c-2a38b81b6984\") " Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.120091 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs" (OuterVolumeSpecName: "logs") pod "17bc6e8b-33ee-4fee-be1c-2a38b81b6984" (UID: "17bc6e8b-33ee-4fee-be1c-2a38b81b6984"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.125509 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht" (OuterVolumeSpecName: "kube-api-access-xncht") pod "17bc6e8b-33ee-4fee-be1c-2a38b81b6984" (UID: "17bc6e8b-33ee-4fee-be1c-2a38b81b6984"). InnerVolumeSpecName "kube-api-access-xncht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.146276 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17bc6e8b-33ee-4fee-be1c-2a38b81b6984" (UID: "17bc6e8b-33ee-4fee-be1c-2a38b81b6984"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.153384 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data" (OuterVolumeSpecName: "config-data") pod "17bc6e8b-33ee-4fee-be1c-2a38b81b6984" (UID: "17bc6e8b-33ee-4fee-be1c-2a38b81b6984"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.221632 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.221665 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.221676 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xncht\" (UniqueName: \"kubernetes.io/projected/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-kube-api-access-xncht\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.221687 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17bc6e8b-33ee-4fee-be1c-2a38b81b6984-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.399009 4842 generic.go:334] "Generic (PLEG): container finished" podID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerID="595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" exitCode=0 Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.399053 4842 generic.go:334] "Generic (PLEG): container finished" podID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerID="09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" exitCode=143 Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.400607 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.400773 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerDied","Data":"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673"} Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.400936 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerDied","Data":"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6"} Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.400953 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"17bc6e8b-33ee-4fee-be1c-2a38b81b6984","Type":"ContainerDied","Data":"0b0025ccff75b8a427586c74f1235a072bc0cd643e505e2735b58d50091fa295"} Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.401002 4842 scope.go:117] "RemoveContainer" containerID="595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.424166 4842 scope.go:117] "RemoveContainer" containerID="09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.449476 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.477453 4842 scope.go:117] "RemoveContainer" containerID="595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" Feb 02 07:07:40 crc kubenswrapper[4842]: E0202 07:07:40.477937 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673\": container with ID starting with 595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673 not found: ID does not exist" containerID="595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.477970 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673"} err="failed to get container status \"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673\": rpc error: code = NotFound desc = could not find container \"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673\": container with ID starting with 595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673 not found: ID does not exist" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.477991 4842 scope.go:117] "RemoveContainer" containerID="09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" Feb 02 07:07:40 crc kubenswrapper[4842]: E0202 07:07:40.479181 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6\": container with ID starting with 09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6 not found: ID does not exist" containerID="09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479208 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6"} err="failed to get container status \"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6\": rpc error: code = NotFound desc = could not find container \"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6\": container with ID starting with 09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6 not found: ID does not exist" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479241 4842 scope.go:117] "RemoveContainer" containerID="595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479475 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673"} err="failed to get container status \"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673\": rpc error: code = NotFound desc = could not find container \"595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673\": container with ID starting with 595649bfe3b98b342c9dde53433e711cb414625b7332937e6cccf886c987f673 not found: ID does not exist" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479495 4842 scope.go:117] "RemoveContainer" containerID="09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479539 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.479686 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6"} err="failed to get container status 
\"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6\": rpc error: code = NotFound desc = could not find container \"09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6\": container with ID starting with 09049ffa66881e23dc81683044c1f242aaaddf3cca17debc8d3b184943dedbf6 not found: ID does not exist" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.490961 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:40 crc kubenswrapper[4842]: E0202 07:07:40.491428 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-log" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.491445 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-log" Feb 02 07:07:40 crc kubenswrapper[4842]: E0202 07:07:40.491483 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-metadata" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.491492 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-metadata" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.491764 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-metadata" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.491781 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" containerName="nova-metadata-log" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.492926 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.501598 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.501704 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.513775 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.630144 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.630961 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.631022 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.631052 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.631086 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb5kt\" (UniqueName: \"kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.732869 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.733003 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.733044 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " 
pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.733067 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.733091 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb5kt\" (UniqueName: \"kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.733846 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.741061 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.741454 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.741554 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.761381 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb5kt\" (UniqueName: \"kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt\") pod \"nova-metadata-0\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") " pod="openstack/nova-metadata-0" Feb 02 07:07:40 crc kubenswrapper[4842]: I0202 07:07:40.813052 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:41 crc kubenswrapper[4842]: I0202 07:07:41.314394 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:41 crc kubenswrapper[4842]: W0202 07:07:41.316983 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0101d15_442a_47f8_9c48_f9c028c63b8b.slice/crio-616cb3925a3da51c5010240956da0ab2f52a9616e59b12676e8e7128e438074d WatchSource:0}: Error finding container 616cb3925a3da51c5010240956da0ab2f52a9616e59b12676e8e7128e438074d: Status 404 returned error can't find the container with id 616cb3925a3da51c5010240956da0ab2f52a9616e59b12676e8e7128e438074d Feb 02 07:07:41 crc kubenswrapper[4842]: I0202 07:07:41.412177 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerStarted","Data":"616cb3925a3da51c5010240956da0ab2f52a9616e59b12676e8e7128e438074d"} Feb 02 07:07:41 crc kubenswrapper[4842]: I0202 07:07:41.453958 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17bc6e8b-33ee-4fee-be1c-2a38b81b6984" path="/var/lib/kubelet/pods/17bc6e8b-33ee-4fee-be1c-2a38b81b6984/volumes" Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.146498 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.146900 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.146968 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.148029 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.148138 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49" gracePeriod=600 Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.442009 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49" exitCode=0 Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.442114 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" 
event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49"} Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.443575 4842 scope.go:117] "RemoveContainer" containerID="fb1eaa0cb5ca379afdcc3758df45691293fe02d27ef7a46aa4f4235e0fb79a62" Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.446380 4842 generic.go:334] "Generic (PLEG): container finished" podID="a1048c2f-1504-465a-b0fb-da368d25f0ff" containerID="55d824abd1b5b048d587e61fdc8db2106087cb9113bf5c22c3cc72f341861791" exitCode=0 Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.446453 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d648k" event={"ID":"a1048c2f-1504-465a-b0fb-da368d25f0ff","Type":"ContainerDied","Data":"55d824abd1b5b048d587e61fdc8db2106087cb9113bf5c22c3cc72f341861791"} Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.449326 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerStarted","Data":"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"} Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.449370 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerStarted","Data":"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"} Feb 02 07:07:42 crc kubenswrapper[4842]: I0202 07:07:42.489850 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.489831685 podStartE2EDuration="2.489831685s" podCreationTimestamp="2026-02-02 07:07:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:42.482570635 +0000 UTC m=+1287.859838547" watchObservedRunningTime="2026-02-02 07:07:42.489831685 +0000 UTC m=+1287.867099597" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.464203 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"} Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.468480 4842 generic.go:334] "Generic (PLEG): container finished" podID="d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" containerID="2d911f330fb7cdc5064800cce65135b706e9f3cc93857bcb38ce5bd51f0bd398" exitCode=0 Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.468603 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" event={"ID":"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2","Type":"ContainerDied","Data":"2d911f330fb7cdc5064800cce65135b706e9f3cc93857bcb38ce5bd51f0bd398"} Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.713149 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.713410 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.730282 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.730321 4842 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.766861 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.839647 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.916486 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.954479 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.975969 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"] Feb 02 07:07:43 crc kubenswrapper[4842]: I0202 07:07:43.978231 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="dnsmasq-dns" containerID="cri-o://ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb" gracePeriod=10 Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.113830 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts\") pod \"a1048c2f-1504-465a-b0fb-da368d25f0ff\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.114153 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data\") pod \"a1048c2f-1504-465a-b0fb-da368d25f0ff\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.114319 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle\") pod \"a1048c2f-1504-465a-b0fb-da368d25f0ff\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.114484 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4gwk\" (UniqueName: \"kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk\") pod \"a1048c2f-1504-465a-b0fb-da368d25f0ff\" (UID: \"a1048c2f-1504-465a-b0fb-da368d25f0ff\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.136522 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts" (OuterVolumeSpecName: "scripts") pod "a1048c2f-1504-465a-b0fb-da368d25f0ff" (UID: "a1048c2f-1504-465a-b0fb-da368d25f0ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.136563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk" (OuterVolumeSpecName: "kube-api-access-t4gwk") pod "a1048c2f-1504-465a-b0fb-da368d25f0ff" (UID: "a1048c2f-1504-465a-b0fb-da368d25f0ff"). 
InnerVolumeSpecName "kube-api-access-t4gwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.158054 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data" (OuterVolumeSpecName: "config-data") pod "a1048c2f-1504-465a-b0fb-da368d25f0ff" (UID: "a1048c2f-1504-465a-b0fb-da368d25f0ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.164433 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1048c2f-1504-465a-b0fb-da368d25f0ff" (UID: "a1048c2f-1504-465a-b0fb-da368d25f0ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.218371 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.218537 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.218577 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1048c2f-1504-465a-b0fb-da368d25f0ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.218627 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4gwk\" (UniqueName: \"kubernetes.io/projected/a1048c2f-1504-465a-b0fb-da368d25f0ff-kube-api-access-t4gwk\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.419670 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.482055 4842 generic.go:334] "Generic (PLEG): container finished" podID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerID="ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb" exitCode=0 Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.482110 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.482146 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" event={"ID":"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16","Type":"ContainerDied","Data":"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"} Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.482205 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bfc9b94f-zwbb4" event={"ID":"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16","Type":"ContainerDied","Data":"1e6b63a560dc8cb262f32d7a92ff245402cd7c329b5c9d29fa17e9ebc50d169c"} Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.482258 4842 scope.go:117] "RemoveContainer" containerID="ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.486146 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d648k" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.486269 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d648k" event={"ID":"a1048c2f-1504-465a-b0fb-da368d25f0ff","Type":"ContainerDied","Data":"317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33"} Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.486816 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="317455acddd1ce3bbfb59ec4c92389c4d99285f875b3cfea6fe1f8ad4e3dad33" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.523547 4842 scope.go:117] "RemoveContainer" containerID="69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775" Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.523778 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.523874 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbgnn\" (UniqueName: \"kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.523915 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.523943 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.524049 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") " 
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.524081 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc\") pod \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\" (UID: \"0e3c4cab-c86f-4819-8d09-ac45ccb6ea16\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.528998 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.544132 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn" (OuterVolumeSpecName: "kube-api-access-nbgnn") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "kube-api-access-nbgnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.574180 4842 scope.go:117] "RemoveContainer" containerID="ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"
Feb 02 07:07:44 crc kubenswrapper[4842]: E0202 07:07:44.578125 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb\": container with ID starting with ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb not found: ID does not exist" containerID="ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.578163 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb"} err="failed to get container status \"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb\": rpc error: code = NotFound desc = could not find container \"ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb\": container with ID starting with ded17f227db2c861bcd18849f326f400b19bd42b6b572e71db0154b4815da1cb not found: ID does not exist"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.578188 4842 scope.go:117] "RemoveContainer" containerID="69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775"
Feb 02 07:07:44 crc kubenswrapper[4842]: E0202 07:07:44.582685 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775\": container with ID starting with 69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775 not found: ID does not exist" containerID="69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.582716 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775"} err="failed to get container status \"69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775\": rpc error: code = NotFound desc = could not find container \"69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775\": container with ID starting with 69afbd01ab369f9ef7aca7e64e6b27b9c62915c91cb3c8a3caf0848c2efc9775 not found: ID does not exist"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.594238 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.594620 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-log" containerID="cri-o://e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229" gracePeriod=30
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.594749 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-api" containerID="cri-o://c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea" gracePeriod=30
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.605888 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.607998 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": EOF"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.608127 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": EOF"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.610460 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.610689 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-log" containerID="cri-o://142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f" gracePeriod=30
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.611073 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-metadata" containerID="cri-o://90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5" gracePeriod=30
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.617634 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config" (OuterVolumeSpecName: "config") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.621804 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.622414 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.624393 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.626546 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.626570 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbgnn\" (UniqueName: \"kubernetes.io/projected/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-kube-api-access-nbgnn\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.626579 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.626588 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.626596 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.653364 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" (UID: "0e3c4cab-c86f-4819-8d09-ac45ccb6ea16"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.729158 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.851480 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pnj4n"
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.875856 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"]
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.887299 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bfc9b94f-zwbb4"]
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.935999 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data\") pod \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.936921 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7668g\" (UniqueName: \"kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g\") pod \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.936948 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle\") pod \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.937169 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts\") pod \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\" (UID: \"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2\") "
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.943305 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts" (OuterVolumeSpecName: "scripts") pod "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" (UID: "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.961615 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g" (OuterVolumeSpecName: "kube-api-access-7668g") pod "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" (UID: "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2"). InnerVolumeSpecName "kube-api-access-7668g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.964661 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data" (OuterVolumeSpecName: "config-data") pod "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" (UID: "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:44 crc kubenswrapper[4842]: I0202 07:07:44.997322 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" (UID: "d0854221-b7f1-4e7c-89bc-b9f14d1b29c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.041536 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-scripts\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.041565 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.041575 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7668g\" (UniqueName: \"kubernetes.io/projected/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-kube-api-access-7668g\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.041585 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.207671 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346450 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs\") pod \"d0101d15-442a-47f8-9c48-f9c028c63b8b\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") "
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346553 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb5kt\" (UniqueName: \"kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt\") pod \"d0101d15-442a-47f8-9c48-f9c028c63b8b\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") "
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346606 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle\") pod \"d0101d15-442a-47f8-9c48-f9c028c63b8b\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") "
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346707 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data\") pod \"d0101d15-442a-47f8-9c48-f9c028c63b8b\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") "
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346806 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs\") pod \"d0101d15-442a-47f8-9c48-f9c028c63b8b\" (UID: \"d0101d15-442a-47f8-9c48-f9c028c63b8b\") "
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.346919 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs" (OuterVolumeSpecName: "logs") pod "d0101d15-442a-47f8-9c48-f9c028c63b8b" (UID: "d0101d15-442a-47f8-9c48-f9c028c63b8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.348291 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0101d15-442a-47f8-9c48-f9c028c63b8b-logs\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.350257 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt" (OuterVolumeSpecName: "kube-api-access-xb5kt") pod "d0101d15-442a-47f8-9c48-f9c028c63b8b" (UID: "d0101d15-442a-47f8-9c48-f9c028c63b8b"). InnerVolumeSpecName "kube-api-access-xb5kt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.383503 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0101d15-442a-47f8-9c48-f9c028c63b8b" (UID: "d0101d15-442a-47f8-9c48-f9c028c63b8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.384319 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data" (OuterVolumeSpecName: "config-data") pod "d0101d15-442a-47f8-9c48-f9c028c63b8b" (UID: "d0101d15-442a-47f8-9c48-f9c028c63b8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.406893 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d0101d15-442a-47f8-9c48-f9c028c63b8b" (UID: "d0101d15-442a-47f8-9c48-f9c028c63b8b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.453527 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.453804 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.453880 4842 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0101d15-442a-47f8-9c48-f9c028c63b8b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.453949 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb5kt\" (UniqueName: \"kubernetes.io/projected/d0101d15-442a-47f8-9c48-f9c028c63b8b-kube-api-access-xb5kt\") on node \"crc\" DevicePath \"\""
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.469811 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" path="/var/lib/kubelet/pods/0e3c4cab-c86f-4819-8d09-ac45ccb6ea16/volumes"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498074 4842 generic.go:334] "Generic (PLEG): container finished" podID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerID="90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5" exitCode=0
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498105 4842 generic.go:334] "Generic (PLEG): container finished" podID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerID="142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f" exitCode=143
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498130 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498148 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerDied","Data":"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"}
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498182 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerDied","Data":"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"}
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498192 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d0101d15-442a-47f8-9c48-f9c028c63b8b","Type":"ContainerDied","Data":"616cb3925a3da51c5010240956da0ab2f52a9616e59b12676e8e7128e438074d"}
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.498259 4842 scope.go:117] "RemoveContainer" containerID="90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.500682 4842 generic.go:334] "Generic (PLEG): container finished" podID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerID="e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229" exitCode=143
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.500731 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerDied","Data":"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229"}
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.505041 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pnj4n"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.505498 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pnj4n" event={"ID":"d0854221-b7f1-4e7c-89bc-b9f14d1b29c2","Type":"ContainerDied","Data":"f408d96c1a5dcbacb2299cd3630fe7dab0d27ba0d70de87656f8d0bbabc0a580"}
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.505539 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f408d96c1a5dcbacb2299cd3630fe7dab0d27ba0d70de87656f8d0bbabc0a580"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.544620 4842 scope.go:117] "RemoveContainer" containerID="142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.544951 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.557930 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583066 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583497 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1048c2f-1504-465a-b0fb-da368d25f0ff" containerName="nova-manage"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583513 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1048c2f-1504-465a-b0fb-da368d25f0ff" containerName="nova-manage"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583524 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-metadata"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583531 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-metadata"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583547 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="init"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583555 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="init"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583564 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="dnsmasq-dns"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583569 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="dnsmasq-dns"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583581 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" containerName="nova-cell1-conductor-db-sync"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583587 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" containerName="nova-cell1-conductor-db-sync"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.583602 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-log"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583608 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-log"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583779 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3c4cab-c86f-4819-8d09-ac45ccb6ea16" containerName="dnsmasq-dns"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583794 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" containerName="nova-cell1-conductor-db-sync"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583808 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-metadata"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583832 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" containerName="nova-metadata-log"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.583845 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1048c2f-1504-465a-b0fb-da368d25f0ff" containerName="nova-manage"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.584724 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.587304 4842 scope.go:117] "RemoveContainer" containerID="90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.587504 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.587556 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.587609 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5\": container with ID starting with 90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5 not found: ID does not exist" containerID="90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.587642 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"} err="failed to get container status \"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5\": rpc error: code = NotFound desc = could not find container \"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5\": container with ID starting with 90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5 not found: ID does not exist"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.587662 4842 scope.go:117] "RemoveContainer" containerID="142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"
Feb 02 07:07:45 crc kubenswrapper[4842]: E0202 07:07:45.587978 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f\": container with ID starting with 142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f not found: ID does not exist" containerID="142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.588001 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"} err="failed to get container status \"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f\": rpc error: code = NotFound desc = could not find container \"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f\": container with ID starting with 142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f not found: ID does not exist"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.588018 4842 scope.go:117] "RemoveContainer" containerID="90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.588265 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5"} err="failed to get container status \"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5\": rpc error: code = NotFound desc = could not find container \"90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5\": container with ID starting with 90f5177852bedec9ea53134ab656f7ec746551249e4791b389f9f86826379aa5 not found: ID does not exist"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.589703 4842 scope.go:117] "RemoveContainer" containerID="142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.590001 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f"} err="failed to get container status \"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f\": rpc error: code = NotFound desc = could not find container \"142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f\": container with ID starting with 142a11b1b5312626f94b53bb27e7f9866a7b14d27e25154a1c540577fd55100f not found: ID does not exist"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.605634 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.619051 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.622730 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.625654 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.640018 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760551 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760622 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58tm7\" (UniqueName: \"kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760650 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760720 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760764 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xwxg\" (UniqueName: \"kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760783 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760801 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.760821 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0"
Feb 02 07:07:45 crc kubenswrapper[4842]:
I0202 07:07:45.868438 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868505 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xwxg\" (UniqueName: \"kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868526 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868544 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868565 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868594 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868628 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58tm7\" (UniqueName: \"kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.868651 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.872527 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.874555 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.874651 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.876317 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.880327 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.884991 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.903867 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58tm7\" (UniqueName: \"kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7\") pod \"nova-cell1-conductor-0\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " pod="openstack/nova-cell1-conductor-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.906409 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xwxg\" (UniqueName: \"kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg\") pod \"nova-metadata-0\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.932382 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:07:45 crc kubenswrapper[4842]: I0202 07:07:45.942849 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 07:07:46 crc kubenswrapper[4842]: I0202 07:07:46.424922 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 07:07:46 crc kubenswrapper[4842]: I0202 07:07:46.518592 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4850512e-bbc8-468d-94ef-1d1be3b0b49c","Type":"ContainerStarted","Data":"f8175b6df5dfbdeb4f2b96118c96bb8462df0286a53b3bdcaea8cf46054c0053"} Feb 02 07:07:46 crc kubenswrapper[4842]: I0202 07:07:46.520725 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:07:46 crc kubenswrapper[4842]: I0202 07:07:46.518664 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerName="nova-scheduler-scheduler" containerID="cri-o://4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" gracePeriod=30 Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.448857 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0101d15-442a-47f8-9c48-f9c028c63b8b" path="/var/lib/kubelet/pods/d0101d15-442a-47f8-9c48-f9c028c63b8b/volumes" Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.528777 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerStarted","Data":"582a5dd3542b08360b5bb369e0ddd50ae9403ee0b66668c8d7e065b109baa6aa"} Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.528855 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerStarted","Data":"e9568e435718a90b20e25e9432be05f2885e29c1c8378fa536932ac94aabd5f1"} Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.528873 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerStarted","Data":"a1edffd6229fcfd445e770ea5551a81134a2ceed05cbf411c15f38de72a6bfa9"} Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.532820 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4850512e-bbc8-468d-94ef-1d1be3b0b49c","Type":"ContainerStarted","Data":"b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2"} Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.533059 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.565459 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.565432606 podStartE2EDuration="2.565432606s" podCreationTimestamp="2026-02-02 07:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:47.552044334 +0000 UTC m=+1292.929312256" watchObservedRunningTime="2026-02-02 07:07:47.565432606 +0000 UTC m=+1292.942700538" Feb 02 07:07:47 crc kubenswrapper[4842]: I0202 07:07:47.591269 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5912473560000002 podStartE2EDuration="2.591247356s" podCreationTimestamp="2026-02-02 07:07:45 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:47.576576022 +0000 UTC m=+1292.953843934" watchObservedRunningTime="2026-02-02 07:07:47.591247356 +0000 UTC m=+1292.968515268" Feb 02 07:07:48 crc kubenswrapper[4842]: E0202 07:07:48.716528 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:07:48 crc kubenswrapper[4842]: E0202 07:07:48.719432 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:07:48 crc kubenswrapper[4842]: E0202 07:07:48.721812 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:07:48 crc kubenswrapper[4842]: E0202 07:07:48.721988 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerName="nova-scheduler-scheduler" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.468113 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.567794 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data\") pod \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.568024 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkhsq\" (UniqueName: \"kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq\") pod \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.568066 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle\") pod \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\" (UID: \"6d440b49-02aa-4a41-9055-8c58b5f9b1f9\") " Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.569772 4842 generic.go:334] "Generic (PLEG): container finished" podID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" exitCode=0 Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.569824 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d440b49-02aa-4a41-9055-8c58b5f9b1f9","Type":"ContainerDied","Data":"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb"} Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.569847 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6d440b49-02aa-4a41-9055-8c58b5f9b1f9","Type":"ContainerDied","Data":"9f46e2c0ade54ebb64e6e6a408030ea704892c226f6722e2d58e5f583b4c2039"} Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.569864 4842 scope.go:117] "RemoveContainer" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.569999 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.570788 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.572648 4842 generic.go:334] "Generic (PLEG): container finished" podID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerID="c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea" exitCode=0 Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.572668 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerDied","Data":"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea"} Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.572683 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b930b76-12ee-4261-b822-7fbfe5bcdec7","Type":"ContainerDied","Data":"4399ed66cbe5ee83e1b05af70a328b096fc6683212b7ff5ef2c0328dbfd1bfc0"} Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.575004 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq" (OuterVolumeSpecName: "kube-api-access-wkhsq") pod "6d440b49-02aa-4a41-9055-8c58b5f9b1f9" (UID: "6d440b49-02aa-4a41-9055-8c58b5f9b1f9"). InnerVolumeSpecName "kube-api-access-wkhsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.595567 4842 scope.go:117] "RemoveContainer" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.596554 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb\": container with ID starting with 4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb not found: ID does not exist" containerID="4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.596610 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb"} err="failed to get container status \"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb\": rpc error: code = NotFound desc = could not find container \"4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb\": container with ID starting with 4e2c9a3c3fa64a744baf07d94d9a86415c44e5fe85bce79da7fd73894b2f5ebb not found: ID does not exist" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.596651 4842 scope.go:117] "RemoveContainer" containerID="c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.603146 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data" (OuterVolumeSpecName: "config-data") pod "6d440b49-02aa-4a41-9055-8c58b5f9b1f9" (UID: "6d440b49-02aa-4a41-9055-8c58b5f9b1f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.610958 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d440b49-02aa-4a41-9055-8c58b5f9b1f9" (UID: "6d440b49-02aa-4a41-9055-8c58b5f9b1f9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.617803 4842 scope.go:117] "RemoveContainer" containerID="e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.637392 4842 scope.go:117] "RemoveContainer" containerID="c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea" Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.637774 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea\": container with ID starting with c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea not found: ID does not exist" containerID="c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.637801 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea"} err="failed to get container status \"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea\": rpc error: code = NotFound desc = could not find container \"c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea\": container with ID starting with c1fc8fa74b4b27c5cf7de3e18e8ae32023df5ef85a2c5c752536859fc8491aea not found: ID does not exist" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.637821 4842 scope.go:117] "RemoveContainer" containerID="e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229" Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.638912 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229\": container with ID starting with e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229 not found: ID does not exist" containerID="e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.638976 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229"} err="failed to get container status \"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229\": rpc error: code = NotFound desc = could not find container \"e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229\": container with ID starting with e559de9abcafad5f9aa8785fa7cef399303f4ad584fe55b639a8918a43693229 not found: ID does not exist" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670134 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle\") pod \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670279 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh6zq\" (UniqueName: \"kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq\") pod \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670349 4842 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data\") pod \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670375 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs\") pod \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\" (UID: \"1b930b76-12ee-4261-b822-7fbfe5bcdec7\") " Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670776 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670793 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.670804 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkhsq\" (UniqueName: \"kubernetes.io/projected/6d440b49-02aa-4a41-9055-8c58b5f9b1f9-kube-api-access-wkhsq\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.671198 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs" (OuterVolumeSpecName: "logs") pod "1b930b76-12ee-4261-b822-7fbfe5bcdec7" (UID: "1b930b76-12ee-4261-b822-7fbfe5bcdec7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.674831 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq" (OuterVolumeSpecName: "kube-api-access-zh6zq") pod "1b930b76-12ee-4261-b822-7fbfe5bcdec7" (UID: "1b930b76-12ee-4261-b822-7fbfe5bcdec7"). InnerVolumeSpecName "kube-api-access-zh6zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.696112 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data" (OuterVolumeSpecName: "config-data") pod "1b930b76-12ee-4261-b822-7fbfe5bcdec7" (UID: "1b930b76-12ee-4261-b822-7fbfe5bcdec7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.696923 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b930b76-12ee-4261-b822-7fbfe5bcdec7" (UID: "1b930b76-12ee-4261-b822-7fbfe5bcdec7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.772307 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.772350 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh6zq\" (UniqueName: \"kubernetes.io/projected/1b930b76-12ee-4261-b822-7fbfe5bcdec7-kube-api-access-zh6zq\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.772366 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b930b76-12ee-4261-b822-7fbfe5bcdec7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.772380 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b930b76-12ee-4261-b822-7fbfe5bcdec7-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.900228 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.908831 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.923686 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.924119 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-api" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924138 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-api" Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.924157 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerName="nova-scheduler-scheduler" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924166 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerName="nova-scheduler-scheduler" Feb 02 07:07:50 crc kubenswrapper[4842]: E0202 07:07:50.924179 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-log" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924190 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-log" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924428 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-api" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924461 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" containerName="nova-api-log" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.924477 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" containerName="nova-scheduler-scheduler" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.925112 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.927331 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.932705 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.932789 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 07:07:50 crc kubenswrapper[4842]: I0202 07:07:50.938178 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.005825 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.005904 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.006236 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtj28\" (UniqueName: \"kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.108036 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtj28\" (UniqueName: \"kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.108954 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.109062 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.113783 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.113896 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.125934 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtj28\" (UniqueName: \"kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28\") pod \"nova-scheduler-0\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " pod="openstack/nova-scheduler-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.300513 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.446772 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d440b49-02aa-4a41-9055-8c58b5f9b1f9" path="/var/lib/kubelet/pods/6d440b49-02aa-4a41-9055-8c58b5f9b1f9/volumes" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.582345 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.605681 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.630412 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.638498 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.640581 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.643292 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.649352 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.731672 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.731741 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.731774 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.731803 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gngbc\" (UniqueName: \"kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 
crc kubenswrapper[4842]: I0202 07:07:51.793613 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:07:51 crc kubenswrapper[4842]: W0202 07:07:51.802053 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46ba09a5_eecd_46b6_9182_96444c6de570.slice/crio-968efa1fb3cd3082b0218178700a10a30e92c9574cb73ef9bff028ccdf092975 WatchSource:0}: Error finding container 968efa1fb3cd3082b0218178700a10a30e92c9574cb73ef9bff028ccdf092975: Status 404 returned error can't find the container with id 968efa1fb3cd3082b0218178700a10a30e92c9574cb73ef9bff028ccdf092975 Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.833922 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.834014 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.834063 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.834107 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gngbc\" (UniqueName: \"kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.834852 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.839900 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.840807 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.871172 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gngbc\" (UniqueName: \"kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc\") pod \"nova-api-0\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " pod="openstack/nova-api-0" Feb 02 07:07:51 crc kubenswrapper[4842]: I0202 07:07:51.964998 4842 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:07:52 crc kubenswrapper[4842]: I0202 07:07:52.577075 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:07:52 crc kubenswrapper[4842]: W0202 07:07:52.577799 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc80be6c0_a1f6_43d6_ba9d_9affaf8daff2.slice/crio-1675d09f9cfa207274c23b46f1678c5e2c1bb07719525781e0d993852dd0e316 WatchSource:0}: Error finding container 1675d09f9cfa207274c23b46f1678c5e2c1bb07719525781e0d993852dd0e316: Status 404 returned error can't find the container with id 1675d09f9cfa207274c23b46f1678c5e2c1bb07719525781e0d993852dd0e316 Feb 02 07:07:52 crc kubenswrapper[4842]: I0202 07:07:52.609068 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46ba09a5-eecd-46b6-9182-96444c6de570","Type":"ContainerStarted","Data":"fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c"} Feb 02 07:07:52 crc kubenswrapper[4842]: I0202 07:07:52.609118 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46ba09a5-eecd-46b6-9182-96444c6de570","Type":"ContainerStarted","Data":"968efa1fb3cd3082b0218178700a10a30e92c9574cb73ef9bff028ccdf092975"} Feb 02 07:07:52 crc kubenswrapper[4842]: I0202 07:07:52.616445 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerStarted","Data":"1675d09f9cfa207274c23b46f1678c5e2c1bb07719525781e0d993852dd0e316"} Feb 02 07:07:53 crc kubenswrapper[4842]: I0202 07:07:52.635416 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6353968180000003 podStartE2EDuration="2.635396818s" podCreationTimestamp="2026-02-02 07:07:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:52.6257974 +0000 UTC m=+1298.003065322" watchObservedRunningTime="2026-02-02 07:07:52.635396818 +0000 UTC m=+1298.012664730" Feb 02 07:07:53 crc kubenswrapper[4842]: I0202 07:07:53.456704 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b930b76-12ee-4261-b822-7fbfe5bcdec7" path="/var/lib/kubelet/pods/1b930b76-12ee-4261-b822-7fbfe5bcdec7/volumes" Feb 02 07:07:53 crc kubenswrapper[4842]: I0202 07:07:53.629743 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerStarted","Data":"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1"} Feb 02 07:07:53 crc kubenswrapper[4842]: I0202 07:07:53.629826 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerStarted","Data":"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5"} Feb 02 07:07:53 crc kubenswrapper[4842]: I0202 07:07:53.670602 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.670575351 podStartE2EDuration="2.670575351s" podCreationTimestamp="2026-02-02 07:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:07:53.657295462 +0000 UTC m=+1299.034563414" 
watchObservedRunningTime="2026-02-02 07:07:53.670575351 +0000 UTC m=+1299.047843303" Feb 02 07:07:55 crc kubenswrapper[4842]: I0202 07:07:55.932737 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 07:07:55 crc kubenswrapper[4842]: I0202 07:07:55.933304 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 07:07:56 crc kubenswrapper[4842]: I0202 07:07:56.006113 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 02 07:07:56 crc kubenswrapper[4842]: I0202 07:07:56.301438 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 07:07:56 crc kubenswrapper[4842]: I0202 07:07:56.954394 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:07:56 crc kubenswrapper[4842]: I0202 07:07:56.954968 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:00 crc kubenswrapper[4842]: I0202 07:08:00.489350 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 07:08:01 crc kubenswrapper[4842]: I0202 07:08:01.300757 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 07:08:01 crc kubenswrapper[4842]: I0202 07:08:01.342810 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 07:08:01 crc kubenswrapper[4842]: I0202 07:08:01.779950 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 07:08:01 crc kubenswrapper[4842]: I0202 07:08:01.966195 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:08:01 crc kubenswrapper[4842]: I0202 07:08:01.966286 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:08:03 crc kubenswrapper[4842]: I0202 07:08:03.048425 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:03 crc kubenswrapper[4842]: I0202 07:08:03.048440 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:04 crc kubenswrapper[4842]: I0202 07:08:04.343437 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:04 crc kubenswrapper[4842]: I0202 07:08:04.343883 4842 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/kube-state-metrics-0" podUID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" containerName="kube-state-metrics" containerID="cri-o://7ef2e70ff07365f726387024ecff0fabe2cd2d02cae00c3b439c9a6c10f2e47d" gracePeriod=30 Feb 02 07:08:04 crc kubenswrapper[4842]: I0202 07:08:04.764007 4842 generic.go:334] "Generic (PLEG): container finished" podID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" containerID="7ef2e70ff07365f726387024ecff0fabe2cd2d02cae00c3b439c9a6c10f2e47d" exitCode=2 Feb 02 07:08:04 crc kubenswrapper[4842]: I0202 07:08:04.764094 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14","Type":"ContainerDied","Data":"7ef2e70ff07365f726387024ecff0fabe2cd2d02cae00c3b439c9a6c10f2e47d"} Feb 02 07:08:04 crc kubenswrapper[4842]: I0202 07:08:04.884528 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.059320 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vmlv\" (UniqueName: \"kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv\") pod \"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14\" (UID: \"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14\") " Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.068885 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv" (OuterVolumeSpecName: "kube-api-access-2vmlv") pod "0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" (UID: "0d9bebc9-9e67-4019-bdf8-22e78dfc3d14"). InnerVolumeSpecName "kube-api-access-2vmlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.161619 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vmlv\" (UniqueName: \"kubernetes.io/projected/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14-kube-api-access-2vmlv\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.777922 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0d9bebc9-9e67-4019-bdf8-22e78dfc3d14","Type":"ContainerDied","Data":"db5e53906e871ace039a809b4c17e0f0a9393b7521bbea23546882f45795c673"} Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.778011 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.778281 4842 scope.go:117] "RemoveContainer" containerID="7ef2e70ff07365f726387024ecff0fabe2cd2d02cae00c3b439c9a6c10f2e47d" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.812901 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.836954 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.846360 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:05 crc kubenswrapper[4842]: E0202 07:08:05.846813 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" containerName="kube-state-metrics" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.846831 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" containerName="kube-state-metrics" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.847002 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" containerName="kube-state-metrics" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.847701 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.849369 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.849725 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.856446 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.938946 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.939233 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.944376 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.977006 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.977938 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x268\" (UniqueName: \"kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.978157 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:05 crc kubenswrapper[4842]: I0202 07:08:05.978245 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.080257 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.080372 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x268\" (UniqueName: \"kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.080481 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.080525 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.085656 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.087055 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.095300 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.107191 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x268\" (UniqueName: 
\"kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268\") pod \"kube-state-metrics-0\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.201385 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.207298 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.207762 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-central-agent" containerID="cri-o://dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c" gracePeriod=30 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.207986 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="proxy-httpd" containerID="cri-o://de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01" gracePeriod=30 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.208102 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="sg-core" containerID="cri-o://c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70" gracePeriod=30 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.208202 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-notification-agent" containerID="cri-o://178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3" gracePeriod=30 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.702150 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:08:06 crc kubenswrapper[4842]: W0202 07:08:06.704534 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b11cfdf_ed7a_48ce_97eb_e03cd6be314c.slice/crio-c5471f47cbc6e33e200626c1c2261b0fedfaae9cf67bbd6b8d7f8382239e8d5f WatchSource:0}: Error finding container c5471f47cbc6e33e200626c1c2261b0fedfaae9cf67bbd6b8d7f8382239e8d5f: Status 404 returned error can't find the container with id c5471f47cbc6e33e200626c1c2261b0fedfaae9cf67bbd6b8d7f8382239e8d5f Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796094 4842 generic.go:334] "Generic (PLEG): container finished" podID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerID="de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01" exitCode=0 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796484 4842 generic.go:334] "Generic (PLEG): container finished" podID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerID="c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70" exitCode=2 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796169 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerDied","Data":"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01"} Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796563 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerDied","Data":"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70"} Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796504 4842 generic.go:334] "Generic (PLEG): container finished" podID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerID="dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c" exitCode=0 Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.796609 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerDied","Data":"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c"} Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.798535 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c","Type":"ContainerStarted","Data":"c5471f47cbc6e33e200626c1c2261b0fedfaae9cf67bbd6b8d7f8382239e8d5f"} Feb 02 07:08:06 crc kubenswrapper[4842]: I0202 07:08:06.807559 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 07:08:07 crc kubenswrapper[4842]: I0202 07:08:07.450771 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d9bebc9-9e67-4019-bdf8-22e78dfc3d14" path="/var/lib/kubelet/pods/0d9bebc9-9e67-4019-bdf8-22e78dfc3d14/volumes" Feb 02 07:08:07 crc kubenswrapper[4842]: I0202 07:08:07.809444 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c","Type":"ContainerStarted","Data":"75aec13501e8ac4a78490209fc3281c84b435ac2ebcc48667746bb6eb38e36e9"} Feb 02 07:08:07 crc kubenswrapper[4842]: I0202 07:08:07.827855 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.423788588 podStartE2EDuration="2.827835899s" podCreationTimestamp="2026-02-02 07:08:05 +0000 UTC" firstStartedPulling="2026-02-02 07:08:06.706745006 +0000 UTC m=+1312.084012928" lastFinishedPulling="2026-02-02 07:08:07.110792327 +0000 UTC m=+1312.488060239" observedRunningTime="2026-02-02 07:08:07.826600849 +0000 UTC m=+1313.203868761" watchObservedRunningTime="2026-02-02 07:08:07.827835899 +0000 UTC m=+1313.205103811" Feb 02 07:08:08 crc kubenswrapper[4842]: I0202 07:08:08.821249 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.553260 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.657985 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658083 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658138 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r667c\" (UniqueName: \"kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658183 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658205 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658244 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658297 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle\") pod \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\" (UID: \"cf0e5e43-2690-43bd-8bc5-412e93b15aa7\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.658861 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.659981 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.666447 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c" (OuterVolumeSpecName: "kube-api-access-r667c") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "kube-api-access-r667c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.682503 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts" (OuterVolumeSpecName: "scripts") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.723544 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.760652 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.760888 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.760898 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r667c\" (UniqueName: \"kubernetes.io/projected/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-kube-api-access-r667c\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.760908 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.760915 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.767765 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.782380 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data" (OuterVolumeSpecName: "config-data") pod "cf0e5e43-2690-43bd-8bc5-412e93b15aa7" (UID: "cf0e5e43-2690-43bd-8bc5-412e93b15aa7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.807780 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.836164 4842 generic.go:334] "Generic (PLEG): container finished" podID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" containerID="3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7" exitCode=137 Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.837264 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36","Type":"ContainerDied","Data":"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7"} Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.837352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36","Type":"ContainerDied","Data":"96da2ab68db04d21f4a7c4434a8ff3b113106acfae59f50f9689e724aa76088b"} Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.837428 4842 scope.go:117] "RemoveContainer" containerID="3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.837668 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.844280 4842 generic.go:334] "Generic (PLEG): container finished" podID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerID="178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3" exitCode=0 Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.844440 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.844488 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerDied","Data":"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3"} Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.844875 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cf0e5e43-2690-43bd-8bc5-412e93b15aa7","Type":"ContainerDied","Data":"11a6c57757bd099cc7d5233c6d0b0381d8088a06d822f2cec437e583d985118d"} Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.862004 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pst98\" (UniqueName: \"kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98\") pod \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.862055 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle\") pod \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.862136 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data\") pod \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\" (UID: \"1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36\") " Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.862788 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.862825 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0e5e43-2690-43bd-8bc5-412e93b15aa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.870500 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98" (OuterVolumeSpecName: "kube-api-access-pst98") pod "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" (UID: "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36"). InnerVolumeSpecName "kube-api-access-pst98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.874379 4842 scope.go:117] "RemoveContainer" containerID="3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7" Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.877311 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7\": container with ID starting with 3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7 not found: ID does not exist" containerID="3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.877364 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7"} err="failed to get container status \"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7\": rpc error: code = NotFound desc = could not find container \"3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7\": container with ID starting with 3469511ccff43b1ee6fd3291450d98a0112ccaac41021b8b1475c185a2a9fdc7 not found: ID does not exist" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.877394 4842 scope.go:117] "RemoveContainer" containerID="de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.896497 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data" (OuterVolumeSpecName: "config-data") pod "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" (UID: "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.900247 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.905077 4842 scope.go:117] "RemoveContainer" containerID="c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.911409 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917476 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.917810 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="sg-core" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917828 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="sg-core" Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.917837 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="proxy-httpd" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917844 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="proxy-httpd" Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.917857 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-central-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917863 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-central-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.917891 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917897 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:08:09 crc kubenswrapper[4842]: E0202 07:08:09.917910 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-notification-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.917918 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-notification-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.918092 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-notification-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.918110 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="sg-core" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.918125 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.918136 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="proxy-httpd" Feb 02 
07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.918144 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" containerName="ceilometer-central-agent" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.919652 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" (UID: "1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.919722 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.950820 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.951118 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.951906 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.961909 4842 scope.go:117] "RemoveContainer" containerID="178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.963910 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.963990 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964019 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964045 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstwv\" (UniqueName: \"kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964105 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964139 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964156 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964176 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964244 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pst98\" (UniqueName: \"kubernetes.io/projected/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-kube-api-access-pst98\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964259 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.964272 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.980165 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:09 crc kubenswrapper[4842]: I0202 07:08:09.990209 4842 scope.go:117] "RemoveContainer" containerID="dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.012377 4842 scope.go:117] "RemoveContainer" containerID="de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01" Feb 02 07:08:10 crc kubenswrapper[4842]: E0202 07:08:10.012876 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01\": container with ID starting with de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01 not found: ID does not exist" containerID="de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.012911 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01"} err="failed to get container status \"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01\": rpc error: code = NotFound desc = could not find container \"de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01\": container with ID starting with de54c85c664eebfb9f0ff8f62d6d8f496165521841ce9cb84ff69597b7e01b01 not found: ID does not exist" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.012952 4842 scope.go:117] "RemoveContainer" containerID="c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70" Feb 02 07:08:10 crc kubenswrapper[4842]: E0202 
07:08:10.013381 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70\": container with ID starting with c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70 not found: ID does not exist" containerID="c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.013413 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70"} err="failed to get container status \"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70\": rpc error: code = NotFound desc = could not find container \"c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70\": container with ID starting with c9cbee5e2b6b132dbb12fd1119aa52ef677a82f95da8f0f9cc5627f485065f70 not found: ID does not exist" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.013436 4842 scope.go:117] "RemoveContainer" containerID="178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3" Feb 02 07:08:10 crc kubenswrapper[4842]: E0202 07:08:10.013725 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3\": container with ID starting with 178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3 not found: ID does not exist" containerID="178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.013746 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3"} err="failed to get container status \"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3\": rpc error: code = NotFound desc = could not find container \"178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3\": container with ID starting with 178309bc38cc30e5625354e994a421729d94b675722d58e99b117553018f4ef3 not found: ID does not exist" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.013757 4842 scope.go:117] "RemoveContainer" containerID="dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c" Feb 02 07:08:10 crc kubenswrapper[4842]: E0202 07:08:10.013971 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c\": container with ID starting with dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c not found: ID does not exist" containerID="dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.013990 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c"} err="failed to get container status \"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c\": rpc error: code = NotFound desc = could not find container \"dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c\": container with ID starting with dc569d8f3de413d032683c9e0f08d75961dc5c32a972aa6f61cd2c9ca65e212c not found: ID does not exist" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066004 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066060 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bstwv\" (UniqueName: \"kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066120 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066175 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066207 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066249 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066343 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066394 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066768 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.066836 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.069891 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.069957 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.070394 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.070759 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.071268 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.081265 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bstwv\" (UniqueName: \"kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv\") pod \"ceilometer-0\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.246902 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.271643 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.271882 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.285469 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.286787 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.290359 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.290532 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.290377 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.295334 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.379628 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.379878 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.379902 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm2d8\" (UniqueName: \"kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.380085 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.380291 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.482292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.482355 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.482377 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm2d8\" (UniqueName: \"kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.482488 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.482592 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.487810 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.489233 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.490822 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.498871 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.502283 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm2d8\" (UniqueName: \"kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.683696 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:10 crc kubenswrapper[4842]: W0202 07:08:10.740354 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e9dbec6_ac74_4b3c_8c31_734a574dade3.slice/crio-ecc01ca8f44e82d84f820f5c98e74898089c47ea6d2ab1ec8e4f74d3d256fd92 WatchSource:0}: Error finding container ecc01ca8f44e82d84f820f5c98e74898089c47ea6d2ab1ec8e4f74d3d256fd92: Status 404 returned error can't find the container with id ecc01ca8f44e82d84f820f5c98e74898089c47ea6d2ab1ec8e4f74d3d256fd92 Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.746538 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:10 crc kubenswrapper[4842]: I0202 07:08:10.859750 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerStarted","Data":"ecc01ca8f44e82d84f820f5c98e74898089c47ea6d2ab1ec8e4f74d3d256fd92"} Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.004548 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:08:11 crc kubenswrapper[4842]: W0202 07:08:11.008119 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a6e38b7_4a6d_4d93_af3d_5abac4efc44d.slice/crio-c35e3662427ebf1f8f424857e434ccf28b83374ce8c58a3384c27005fe0af7e8 WatchSource:0}: Error finding container c35e3662427ebf1f8f424857e434ccf28b83374ce8c58a3384c27005fe0af7e8: Status 404 returned error can't find the container with id c35e3662427ebf1f8f424857e434ccf28b83374ce8c58a3384c27005fe0af7e8 Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.453187 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36" path="/var/lib/kubelet/pods/1a05b52c-3e0b-458c-97ff-c5ef0f3a6f36/volumes" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.454768 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf0e5e43-2690-43bd-8bc5-412e93b15aa7" path="/var/lib/kubelet/pods/cf0e5e43-2690-43bd-8bc5-412e93b15aa7/volumes" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.883430 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerStarted","Data":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.885767 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d","Type":"ContainerStarted","Data":"19ce3a33fe25413f4f312112bb88f2cc8ceb19171589dbec9313d4c51f900ca1"} Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.885819 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d","Type":"ContainerStarted","Data":"c35e3662427ebf1f8f424857e434ccf28b83374ce8c58a3384c27005fe0af7e8"} Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.916812 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.916789691 podStartE2EDuration="1.916789691s" podCreationTimestamp="2026-02-02 07:08:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:11.914825432 +0000 UTC m=+1317.292093354" watchObservedRunningTime="2026-02-02 07:08:11.916789691 +0000 UTC m=+1317.294057613" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.973524 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.974563 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.980814 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 07:08:11 crc kubenswrapper[4842]: I0202 07:08:11.990996 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 07:08:12 crc kubenswrapper[4842]: I0202 07:08:12.897935 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerStarted","Data":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} Feb 02 07:08:12 crc kubenswrapper[4842]: I0202 07:08:12.898584 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 07:08:12 crc kubenswrapper[4842]: I0202 07:08:12.898620 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerStarted","Data":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} Feb 02 07:08:12 crc kubenswrapper[4842]: I0202 07:08:12.902023 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.086763 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"] Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.091586 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.107402 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"] Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139190 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg6j8\" (UniqueName: \"kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139252 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139287 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139323 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139444 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.139600 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240602 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg6j8\" (UniqueName: \"kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240647 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240693 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240729 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240745 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.240791 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.241639 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.242370 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.242862 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.243366 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.243872 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.258604 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg6j8\" (UniqueName: 
\"kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8\") pod \"dnsmasq-dns-5ddd577785-8dp78\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.415919 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:13 crc kubenswrapper[4842]: I0202 07:08:13.959348 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"] Feb 02 07:08:14 crc kubenswrapper[4842]: I0202 07:08:14.918097 4842 generic.go:334] "Generic (PLEG): container finished" podID="82827ec9-ac05-41ab-988c-99083ccdb949" containerID="8bb94b1491e283b01c189ac6006d3fc23945dfbdff62fb805e090497b073e7c4" exitCode=0 Feb 02 07:08:14 crc kubenswrapper[4842]: I0202 07:08:14.918201 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" event={"ID":"82827ec9-ac05-41ab-988c-99083ccdb949","Type":"ContainerDied","Data":"8bb94b1491e283b01c189ac6006d3fc23945dfbdff62fb805e090497b073e7c4"} Feb 02 07:08:14 crc kubenswrapper[4842]: I0202 07:08:14.918500 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" event={"ID":"82827ec9-ac05-41ab-988c-99083ccdb949","Type":"ContainerStarted","Data":"3b795fd687296b78b29dffde7f9f5a14bcbd688f6a97aac6389de0b8b43b6094"} Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.420650 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.684074 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.933480 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerStarted","Data":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.934687 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.938425 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-log" containerID="cri-o://04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5" gracePeriod=30 Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.939355 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" event={"ID":"82827ec9-ac05-41ab-988c-99083ccdb949","Type":"ContainerStarted","Data":"b1f4bec090a15a8f33492373710dad94faf1e40a938d6cc9e964fd93f07eecf3"} Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.939386 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.939435 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-api" containerID="cri-o://3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1" gracePeriod=30 Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.963844 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] 
Feb 02 07:08:15 crc kubenswrapper[4842]: I0202 07:08:15.992566 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9197622340000002 podStartE2EDuration="6.992545885s" podCreationTimestamp="2026-02-02 07:08:09 +0000 UTC" firstStartedPulling="2026-02-02 07:08:10.742290512 +0000 UTC m=+1316.119558424" lastFinishedPulling="2026-02-02 07:08:14.815074153 +0000 UTC m=+1320.192342075" observedRunningTime="2026-02-02 07:08:15.983285005 +0000 UTC m=+1321.360552927" watchObservedRunningTime="2026-02-02 07:08:15.992545885 +0000 UTC m=+1321.369813797" Feb 02 07:08:16 crc kubenswrapper[4842]: I0202 07:08:16.012163 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" podStartSLOduration=3.012140521 podStartE2EDuration="3.012140521s" podCreationTimestamp="2026-02-02 07:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:16.004099611 +0000 UTC m=+1321.381367543" watchObservedRunningTime="2026-02-02 07:08:16.012140521 +0000 UTC m=+1321.389408423" Feb 02 07:08:16 crc kubenswrapper[4842]: I0202 07:08:16.221099 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 02 07:08:16 crc kubenswrapper[4842]: I0202 07:08:16.951064 4842 generic.go:334] "Generic (PLEG): container finished" podID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerID="04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5" exitCode=143 Feb 02 07:08:16 crc kubenswrapper[4842]: I0202 07:08:16.951149 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerDied","Data":"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5"} Feb 02 07:08:17 crc kubenswrapper[4842]: I0202 07:08:17.961261 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-central-agent" containerID="cri-o://f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0" gracePeriod=30 Feb 02 07:08:17 crc kubenswrapper[4842]: I0202 07:08:17.962364 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="proxy-httpd" containerID="cri-o://7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" gracePeriod=30 Feb 02 07:08:17 crc kubenswrapper[4842]: I0202 07:08:17.962489 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="sg-core" containerID="cri-o://dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" gracePeriod=30 Feb 02 07:08:17 crc kubenswrapper[4842]: I0202 07:08:17.962560 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-notification-agent" containerID="cri-o://f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" gracePeriod=30 Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.702990 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754465 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754610 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754643 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754671 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754719 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754737 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bstwv\" (UniqueName: \"kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754774 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.754812 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data\") pod \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\" (UID: \"3e9dbec6-ac74-4b3c-8c31-734a574dade3\") " Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.757573 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.758095 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.762169 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts" (OuterVolumeSpecName: "scripts") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.762589 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv" (OuterVolumeSpecName: "kube-api-access-bstwv") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "kube-api-access-bstwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.799364 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.810638 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.834158 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.856643 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.856969 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.857062 4842 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.857140 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.857230 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e9dbec6-ac74-4b3c-8c31-734a574dade3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.857319 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bstwv\" (UniqueName: \"kubernetes.io/projected/3e9dbec6-ac74-4b3c-8c31-734a574dade3-kube-api-access-bstwv\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.857396 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.874175 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data" (OuterVolumeSpecName: "config-data") pod "3e9dbec6-ac74-4b3c-8c31-734a574dade3" (UID: "3e9dbec6-ac74-4b3c-8c31-734a574dade3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.959770 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e9dbec6-ac74-4b3c-8c31-734a574dade3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987245 4842 generic.go:334] "Generic (PLEG): container finished" podID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" exitCode=0 Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987275 4842 generic.go:334] "Generic (PLEG): container finished" podID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" exitCode=2 Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987286 4842 generic.go:334] "Generic (PLEG): container finished" podID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" exitCode=0 Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987294 4842 generic.go:334] "Generic (PLEG): container finished" podID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0" exitCode=0 Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987318 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerDied","Data":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987328 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987350 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerDied","Data":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987367 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerDied","Data":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987379 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerDied","Data":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987388 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e9dbec6-ac74-4b3c-8c31-734a574dade3","Type":"ContainerDied","Data":"ecc01ca8f44e82d84f820f5c98e74898089c47ea6d2ab1ec8e4f74d3d256fd92"} Feb 02 07:08:18 crc kubenswrapper[4842]: I0202 07:08:18.987403 4842 scope.go:117] "RemoveContainer" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.017268 4842 scope.go:117] "RemoveContainer" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.032446 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 
07:08:19.041638 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.057042 4842 scope.go:117] "RemoveContainer" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.073803 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.074259 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="proxy-httpd" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074272 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="proxy-httpd" Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.074296 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-notification-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074302 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-notification-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.074324 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-central-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074330 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-central-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.074343 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="sg-core" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074349 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="sg-core" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074520 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="sg-core" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074530 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-central-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074538 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="proxy-httpd" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.074556 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" containerName="ceilometer-notification-agent" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.076264 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.082655 4842 scope.go:117] "RemoveContainer" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.083727 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.084498 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.092128 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.092994 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.163495 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.163805 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.163920 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.164018 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4btlq\" (UniqueName: \"kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.164247 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.164372 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.164409 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: 
I0202 07:08:19.164506 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.256813 4842 scope.go:117] "RemoveContainer" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.258483 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": container with ID starting with 7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a not found: ID does not exist" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.258535 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} err="failed to get container status \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": rpc error: code = NotFound desc = could not find container \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": container with ID starting with 7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.258582 4842 scope.go:117] "RemoveContainer" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.258930 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": container with ID starting with dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868 not found: ID does not exist" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.258986 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} err="failed to get container status \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": rpc error: code = NotFound desc = could not find container \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": container with ID starting with dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259018 4842 scope.go:117] "RemoveContainer" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.259325 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": container with ID starting with f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489 not found: ID does not exist" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259354 4842 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} err="failed to get container status \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": rpc error: code = NotFound desc = could not find container \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": container with ID starting with f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259377 4842 scope.go:117] "RemoveContainer" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0" Feb 02 07:08:19 crc kubenswrapper[4842]: E0202 07:08:19.259633 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259677 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} err="failed to get container status \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259694 4842 scope.go:117] "RemoveContainer" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259931 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} err="failed to get container status \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": rpc error: code = NotFound desc = could not find container \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": container with ID starting with 7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.259951 4842 scope.go:117] "RemoveContainer" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260164 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} err="failed to get container status \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": rpc error: code = NotFound desc = could not find container \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": container with ID starting with dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260183 4842 scope.go:117] "RemoveContainer" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260455 4842 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} err="failed to get container status \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": rpc error: code = NotFound desc = could not find container \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": container with ID starting with f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260470 4842 scope.go:117] "RemoveContainer" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260725 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} err="failed to get container status \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260743 4842 scope.go:117] "RemoveContainer" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.260985 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} err="failed to get container status \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": rpc error: code = NotFound desc = could not find container \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": container with ID starting with 7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.261012 4842 scope.go:117] "RemoveContainer" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.261456 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} err="failed to get container status \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": rpc error: code = NotFound desc = could not find container \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": container with ID starting with dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.261481 4842 scope.go:117] "RemoveContainer" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.261750 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} err="failed to get container status \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": rpc error: code = NotFound desc = could not find container \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": container with ID starting with f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489 
not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.261772 4842 scope.go:117] "RemoveContainer" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.262104 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} err="failed to get container status \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.262130 4842 scope.go:117] "RemoveContainer" containerID="7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.262430 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a"} err="failed to get container status \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": rpc error: code = NotFound desc = could not find container \"7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a\": container with ID starting with 7300c59526f673d2f6ac56ca198c6cbd05d34b94f837009c7e580de96cbe635a not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.262457 4842 scope.go:117] "RemoveContainer" containerID="dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.263550 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868"} err="failed to get container status \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": rpc error: code = NotFound desc = could not find container \"dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868\": container with ID starting with dafb738c5a9a4f872263f4619c124521c6d21e6cb2e3cbb2cfcfccf2302d7868 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.263583 4842 scope.go:117] "RemoveContainer" containerID="f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.263829 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489"} err="failed to get container status \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": rpc error: code = NotFound desc = could not find container \"f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489\": container with ID starting with f86777855e72110578e313fb73dc460db69e7873a4ec938b7b31eeaec80d6489 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.263854 4842 scope.go:117] "RemoveContainer" containerID="f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.264143 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0"} err="failed to get 
container status \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": rpc error: code = NotFound desc = could not find container \"f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0\": container with ID starting with f0ce953d348baf71860643eaa7225116a9afb17d5d8c09842b99ee3d1902bff0 not found: ID does not exist" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.266583 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.266998 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.267153 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.267945 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.268088 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.268141 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btlq\" (UniqueName: \"kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.268296 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.268429 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.268442 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 
07:08:19.268456 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.273349 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.274675 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.275291 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.275835 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.276497 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.289395 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4btlq\" (UniqueName: \"kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq\") pod \"ceilometer-0\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.444925 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e9dbec6-ac74-4b3c-8c31-734a574dade3" path="/var/lib/kubelet/pods/3e9dbec6-ac74-4b3c-8c31-734a574dade3/volumes" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.486885 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.568268 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.572004 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gngbc\" (UniqueName: \"kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc\") pod \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.572069 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data\") pod \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.572097 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle\") pod \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.572146 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs\") pod \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\" (UID: \"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2\") " Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.572523 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs" (OuterVolumeSpecName: "logs") pod "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" (UID: "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.573308 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.578914 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc" (OuterVolumeSpecName: "kube-api-access-gngbc") pod "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" (UID: "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2"). InnerVolumeSpecName "kube-api-access-gngbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.602542 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" (UID: "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.649088 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data" (OuterVolumeSpecName: "config-data") pod "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" (UID: "c80be6c0-a1f6-43d6-ba9d-9affaf8daff2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.675441 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gngbc\" (UniqueName: \"kubernetes.io/projected/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-kube-api-access-gngbc\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.675473 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:19 crc kubenswrapper[4842]: I0202 07:08:19.675482 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.002575 4842 generic.go:334] "Generic (PLEG): container finished" podID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerID="3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1" exitCode=0 Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.002637 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerDied","Data":"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1"} Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.002658 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.002679 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c80be6c0-a1f6-43d6-ba9d-9affaf8daff2","Type":"ContainerDied","Data":"1675d09f9cfa207274c23b46f1678c5e2c1bb07719525781e0d993852dd0e316"} Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.002694 4842 scope.go:117] "RemoveContainer" containerID="3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.030375 4842 scope.go:117] "RemoveContainer" containerID="04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.058133 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.072612 4842 scope.go:117] "RemoveContainer" containerID="3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1" Feb 02 07:08:20 crc kubenswrapper[4842]: E0202 07:08:20.073294 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1\": container with ID starting with 3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1 not found: ID does not exist" containerID="3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.073360 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1"} err="failed to get container status \"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1\": rpc error: code = NotFound desc = could not find container \"3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1\": container with ID starting with 
3fb1e025904b8d9ff9892132492b878acb177e84b913bbf189ea1d283f0d92c1 not found: ID does not exist" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.073400 4842 scope.go:117] "RemoveContainer" containerID="04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5" Feb 02 07:08:20 crc kubenswrapper[4842]: E0202 07:08:20.076962 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5\": container with ID starting with 04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5 not found: ID does not exist" containerID="04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.077004 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5"} err="failed to get container status \"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5\": rpc error: code = NotFound desc = could not find container \"04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5\": container with ID starting with 04b4da4c7cdb199c83e91cbd927bc8dcd576a40d0a7ecd072203710a818e10c5 not found: ID does not exist" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.078859 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.090151 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.097523 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:20 crc kubenswrapper[4842]: E0202 07:08:20.098123 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-log" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.098157 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-log" Feb 02 07:08:20 crc kubenswrapper[4842]: E0202 07:08:20.098179 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-api" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.098191 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-api" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.098561 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-log" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.098606 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" containerName="nova-api-api" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.100176 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.106379 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.106434 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.106439 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.106489 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.186895 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.186956 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.186972 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7g6\" (UniqueName: \"kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.187254 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.187303 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.187355 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288363 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288403 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc7g6\" (UniqueName: \"kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6\") pod \"nova-api-0\" (UID: 
\"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288478 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288494 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288519 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.288566 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.289827 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.293956 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.296165 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.296385 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.310042 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 02 07:08:20 crc kubenswrapper[4842]: I0202 07:08:20.310826 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc7g6\" (UniqueName: \"kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6\") pod \"nova-api-0\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " pod="openstack/nova-api-0" Feb 
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.013450 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerStarted","Data":"454fd5e306d51498a984d5077e2446e7c6cf9f4c21170f227c52179104c4a621"}
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.013490 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerStarted","Data":"dc072634ce1fdc7d7f270a2d47917083559fd131ffec946966f43f1f6581f8f4"}
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.015225 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerStarted","Data":"05bed553a9d1167fc6969d8d0d674b6850e5b78bc317f359dad785df3a643e85"}
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.030318 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.293714 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-77gxn"]
Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.295799 4842 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.300070 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.300651 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.312051 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.312101 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqs5l\" (UniqueName: \"kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.312162 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.312267 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.313186 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-77gxn"] Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.413700 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.413766 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.413806 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqs5l\" (UniqueName: \"kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.413842 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.420724 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.421683 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.421694 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.430957 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqs5l\" (UniqueName: \"kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l\") pod \"nova-cell1-cell-mapping-77gxn\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.444160 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80be6c0-a1f6-43d6-ba9d-9affaf8daff2" path="/var/lib/kubelet/pods/c80be6c0-a1f6-43d6-ba9d-9affaf8daff2/volumes" Feb 02 07:08:21 crc kubenswrapper[4842]: I0202 07:08:21.673128 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:22 crc kubenswrapper[4842]: I0202 07:08:22.024464 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerStarted","Data":"b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1"} Feb 02 07:08:22 crc kubenswrapper[4842]: I0202 07:08:22.026544 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerStarted","Data":"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452"} Feb 02 07:08:22 crc kubenswrapper[4842]: I0202 07:08:22.026602 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerStarted","Data":"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4"} Feb 02 07:08:22 crc kubenswrapper[4842]: I0202 07:08:22.043511 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.043497107 podStartE2EDuration="2.043497107s" podCreationTimestamp="2026-02-02 07:08:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:22.041888097 +0000 UTC m=+1327.419156029" watchObservedRunningTime="2026-02-02 07:08:22.043497107 +0000 UTC m=+1327.420765019" Feb 02 07:08:22 crc kubenswrapper[4842]: I0202 07:08:22.140185 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-77gxn"] Feb 02 07:08:22 crc kubenswrapper[4842]: W0202 07:08:22.142343 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38cfcc24_6854_414a_9d6c_4769e1366eb1.slice/crio-80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce WatchSource:0}: Error finding container 80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce: Status 404 returned error can't find the container with id 80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.039708 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerStarted","Data":"4bae417047baf6bf846e8de15338ba7207499db97e8d990c0e70145588c621ef"} Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.043605 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77gxn" event={"ID":"38cfcc24-6854-414a-9d6c-4769e1366eb1","Type":"ContainerStarted","Data":"999eacbb47149d7ff50ad4df7698189fd41e6e1be3e25e8c83a58d8439abc53c"} Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.043654 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77gxn" event={"ID":"38cfcc24-6854-414a-9d6c-4769e1366eb1","Type":"ContainerStarted","Data":"80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce"} Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.066301 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-77gxn" podStartSLOduration=2.066285682 podStartE2EDuration="2.066285682s" podCreationTimestamp="2026-02-02 07:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-02 07:08:23.064080668 +0000 UTC m=+1328.441348590" watchObservedRunningTime="2026-02-02 07:08:23.066285682 +0000 UTC m=+1328.443553594" Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.421339 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.525649 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"] Feb 02 07:08:23 crc kubenswrapper[4842]: I0202 07:08:23.525890 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="dnsmasq-dns" containerID="cri-o://5f6dabb3b7c34feb5a2123ac9fa2eb87a3cf03a3caf3efd65fb72c179cb7cd52" gracePeriod=10 Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.053870 4842 generic.go:334] "Generic (PLEG): container finished" podID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerID="5f6dabb3b7c34feb5a2123ac9fa2eb87a3cf03a3caf3efd65fb72c179cb7cd52" exitCode=0 Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.053904 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" event={"ID":"9e447f46-c8cc-42f2-92e6-1465a9f407c6","Type":"ContainerDied","Data":"5f6dabb3b7c34feb5a2123ac9fa2eb87a3cf03a3caf3efd65fb72c179cb7cd52"} Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.054246 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" event={"ID":"9e447f46-c8cc-42f2-92e6-1465a9f407c6","Type":"ContainerDied","Data":"451377c79842f0376185bd4f8a1618a4b5a16afcc7be3c0724fb62e157fb3755"} Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.054282 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451377c79842f0376185bd4f8a1618a4b5a16afcc7be3c0724fb62e157fb3755" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.130744 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.270722 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.271001 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.271032 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55tx7\" (UniqueName: \"kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.271147 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.271198 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.271288 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc\") pod \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\" (UID: \"9e447f46-c8cc-42f2-92e6-1465a9f407c6\") " Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.275053 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7" (OuterVolumeSpecName: "kube-api-access-55tx7") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "kube-api-access-55tx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.317920 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.324408 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.330659 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.342666 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.343122 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config" (OuterVolumeSpecName: "config") pod "9e447f46-c8cc-42f2-92e6-1465a9f407c6" (UID: "9e447f46-c8cc-42f2-92e6-1465a9f407c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373673 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373704 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373716 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373725 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373748 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e447f46-c8cc-42f2-92e6-1465a9f407c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:24 crc kubenswrapper[4842]: I0202 07:08:24.373756 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55tx7\" (UniqueName: \"kubernetes.io/projected/9e447f46-c8cc-42f2-92e6-1465a9f407c6-kube-api-access-55tx7\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.085852 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.085981 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerStarted","Data":"bad70e2dba666c009e7972d01ff11c1b18b18e47b07343dcd24db229c935fcc3"} Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.114090 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.2417673320000002 podStartE2EDuration="6.114073351s" podCreationTimestamp="2026-02-02 07:08:19 +0000 UTC" firstStartedPulling="2026-02-02 07:08:20.080141552 +0000 UTC m=+1325.457409504" lastFinishedPulling="2026-02-02 07:08:23.952447611 +0000 UTC m=+1329.329715523" observedRunningTime="2026-02-02 07:08:25.107762204 +0000 UTC m=+1330.485030126" watchObservedRunningTime="2026-02-02 07:08:25.114073351 +0000 UTC m=+1330.491341253" Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.132981 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"] Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.140936 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557bbc7df7-8rcz9"] Feb 02 07:08:25 crc kubenswrapper[4842]: I0202 07:08:25.447981 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" path="/var/lib/kubelet/pods/9e447f46-c8cc-42f2-92e6-1465a9f407c6/volumes" Feb 02 07:08:26 crc kubenswrapper[4842]: I0202 07:08:26.097649 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 02 07:08:28 crc kubenswrapper[4842]: I0202 07:08:28.121773 4842 generic.go:334] "Generic (PLEG): container finished" podID="38cfcc24-6854-414a-9d6c-4769e1366eb1" containerID="999eacbb47149d7ff50ad4df7698189fd41e6e1be3e25e8c83a58d8439abc53c" exitCode=0 Feb 02 07:08:28 crc kubenswrapper[4842]: I0202 07:08:28.122137 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77gxn" event={"ID":"38cfcc24-6854-414a-9d6c-4769e1366eb1","Type":"ContainerDied","Data":"999eacbb47149d7ff50ad4df7698189fd41e6e1be3e25e8c83a58d8439abc53c"} Feb 02 07:08:28 crc kubenswrapper[4842]: I0202 07:08:28.917580 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-557bbc7df7-8rcz9" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.190:5353: i/o timeout" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.458264 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.598066 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts\") pod \"38cfcc24-6854-414a-9d6c-4769e1366eb1\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.598649 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqs5l\" (UniqueName: \"kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l\") pod \"38cfcc24-6854-414a-9d6c-4769e1366eb1\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.598911 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data\") pod \"38cfcc24-6854-414a-9d6c-4769e1366eb1\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.599134 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle\") pod \"38cfcc24-6854-414a-9d6c-4769e1366eb1\" (UID: \"38cfcc24-6854-414a-9d6c-4769e1366eb1\") " Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.606322 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l" (OuterVolumeSpecName: "kube-api-access-tqs5l") pod "38cfcc24-6854-414a-9d6c-4769e1366eb1" (UID: "38cfcc24-6854-414a-9d6c-4769e1366eb1"). InnerVolumeSpecName "kube-api-access-tqs5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.607483 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts" (OuterVolumeSpecName: "scripts") pod "38cfcc24-6854-414a-9d6c-4769e1366eb1" (UID: "38cfcc24-6854-414a-9d6c-4769e1366eb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.647693 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38cfcc24-6854-414a-9d6c-4769e1366eb1" (UID: "38cfcc24-6854-414a-9d6c-4769e1366eb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.656530 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data" (OuterVolumeSpecName: "config-data") pod "38cfcc24-6854-414a-9d6c-4769e1366eb1" (UID: "38cfcc24-6854-414a-9d6c-4769e1366eb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.701890 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqs5l\" (UniqueName: \"kubernetes.io/projected/38cfcc24-6854-414a-9d6c-4769e1366eb1-kube-api-access-tqs5l\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.701932 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.701946 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:29 crc kubenswrapper[4842]: I0202 07:08:29.701955 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38cfcc24-6854-414a-9d6c-4769e1366eb1-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.147340 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-77gxn" event={"ID":"38cfcc24-6854-414a-9d6c-4769e1366eb1","Type":"ContainerDied","Data":"80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce"} Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.147396 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80b83a98f26a6e2e866312dd7c5fab8dc991b4d5d03904f45c846c25a98dd4ce" Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.147430 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-77gxn" Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.368676 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.369268 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-log" containerID="cri-o://c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" gracePeriod=30 Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.369306 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-api" containerID="cri-o://89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" gracePeriod=30 Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.385388 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.385573 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" containerName="nova-scheduler-scheduler" containerID="cri-o://fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" gracePeriod=30 Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.444470 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.446131 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" 
containerName="nova-metadata-metadata" containerID="cri-o://582a5dd3542b08360b5bb369e0ddd50ae9403ee0b66668c8d7e065b109baa6aa" gracePeriod=30 Feb 02 07:08:30 crc kubenswrapper[4842]: I0202 07:08:30.446370 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" containerID="cri-o://e9568e435718a90b20e25e9432be05f2885e29c1c8378fa536932ac94aabd5f1" gracePeriod=30 Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.149933 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156365 4842 generic.go:334] "Generic (PLEG): container finished" podID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerID="89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" exitCode=0 Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156406 4842 generic.go:334] "Generic (PLEG): container finished" podID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerID="c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" exitCode=143 Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156428 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156452 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerDied","Data":"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452"} Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156497 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerDied","Data":"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4"} Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156509 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b4a4e099-0255-49f4-bcb4-7962af32cad2","Type":"ContainerDied","Data":"05bed553a9d1167fc6969d8d0d674b6850e5b78bc317f359dad785df3a643e85"} Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.156525 4842 scope.go:117] "RemoveContainer" containerID="89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.158684 4842 generic.go:334] "Generic (PLEG): container finished" podID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerID="e9568e435718a90b20e25e9432be05f2885e29c1c8378fa536932ac94aabd5f1" exitCode=143 Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.158711 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerDied","Data":"e9568e435718a90b20e25e9432be05f2885e29c1c8378fa536932ac94aabd5f1"} Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.195627 4842 scope.go:117] "RemoveContainer" containerID="c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.216978 4842 scope.go:117] "RemoveContainer" containerID="89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.217494 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452\": container with ID starting with 89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452 not found: ID does not exist" containerID="89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.217538 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452"} err="failed to get container status \"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452\": rpc error: code = NotFound desc = could not find container \"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452\": container with ID starting with 89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452 not found: ID does not exist" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.217571 4842 scope.go:117] "RemoveContainer" containerID="c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.218130 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4\": container with ID starting with c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4 not found: ID does not exist" containerID="c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.218156 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4"} err="failed to get container status \"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4\": rpc error: code = NotFound desc = could not find container \"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4\": container with ID starting with c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4 not found: ID does not exist" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.218171 4842 scope.go:117] "RemoveContainer" containerID="89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.218546 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452"} err="failed to get container status \"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452\": rpc error: code = NotFound desc = could not find container \"89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452\": container with ID starting with 89ff40cb4539915cb06a0bb724a67a4032f8a76698ee5eaf19737a5a65488452 not found: ID does not exist" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.218564 4842 scope.go:117] "RemoveContainer" containerID="c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.218817 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4"} err="failed to get container status \"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4\": rpc error: code = NotFound desc = could not find container \"c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4\": container with ID starting with 
c9d2bc9e99757d3bdd11596f02c67e0feeba8f1ce1d8460a778376411014d3c4 not found: ID does not exist"
Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.303025 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.304322 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.305709 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.305795 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" containerName="nova-scheduler-scheduler"
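[annotation] The three ExecSync failures and the "Probe errored" line above are the readiness exec probe colliding with the container's own termination: CRI-O refuses to register a new exec PID in a stopping container, so the probe errors out until the pod is gone. The probed command, pgrep with a run-state filter, exits 0 when a matching process exists and 1 when none does. Run out-of-band, the same check looks roughly like the sketch below (illustrative only; pgrep's -r/--runstates filter requires a reasonably recent procps-ng):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// ready reports whether a nova-scheduler process in run state D, R, S, or T
// exists, mirroring the probe command in the log entries above.
func ready() (bool, error) {
	cmd := exec.Command("/usr/bin/pgrep", "-r", "DRST", "nova-scheduler")
	err := cmd.Run()
	if err == nil {
		return true, nil // exit 0: at least one process matched
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return false, nil // exit 1: pgrep ran fine but found nothing
	}
	return false, err // pgrep itself failed to run
}

func main() {
	ok, err := ready()
	fmt.Println("ready:", ok, "err:", err)
}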
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle\") pod \"b4a4e099-0255-49f4-bcb4-7962af32cad2\" (UID: \"b4a4e099-0255-49f4-bcb4-7962af32cad2\") " Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.332838 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs" (OuterVolumeSpecName: "logs") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.338977 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6" (OuterVolumeSpecName: "kube-api-access-bc7g6") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "kube-api-access-bc7g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.380014 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data" (OuterVolumeSpecName: "config-data") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.382871 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.395244 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.402065 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b4a4e099-0255-49f4-bcb4-7962af32cad2" (UID: "b4a4e099-0255-49f4-bcb4-7962af32cad2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434253 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4a4e099-0255-49f4-bcb4-7962af32cad2-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434290 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434304 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc7g6\" (UniqueName: \"kubernetes.io/projected/b4a4e099-0255-49f4-bcb4-7962af32cad2-kube-api-access-bc7g6\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434318 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434330 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.434343 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4a4e099-0255-49f4-bcb4-7962af32cad2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.498743 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.521027 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.526849 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.527352 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-log" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527372 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-log" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.527397 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="dnsmasq-dns" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527409 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="dnsmasq-dns" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.527425 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-api" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527433 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-api" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.527451 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38cfcc24-6854-414a-9d6c-4769e1366eb1" containerName="nova-manage" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527460 4842 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="38cfcc24-6854-414a-9d6c-4769e1366eb1" containerName="nova-manage" Feb 02 07:08:31 crc kubenswrapper[4842]: E0202 07:08:31.527480 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="init" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527488 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="init" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527718 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-log" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527744 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="38cfcc24-6854-414a-9d6c-4769e1366eb1" containerName="nova-manage" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527758 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" containerName="nova-api-api" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.527774 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e447f46-c8cc-42f2-92e6-1465a9f407c6" containerName="dnsmasq-dns" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.528955 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.533460 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.539760 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.539944 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.540159 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.640918 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh8lx\" (UniqueName: \"kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.641191 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.641432 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.641587 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " 
pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.641696 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.641797 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.743771 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.744072 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.744188 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.744347 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.744445 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh8lx\" (UniqueName: \"kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.744514 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.745290 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.748501 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " 
pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.748521 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.753135 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.753894 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.772919 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh8lx\" (UniqueName: \"kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx\") pod \"nova-api-0\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " pod="openstack/nova-api-0" Feb 02 07:08:31 crc kubenswrapper[4842]: I0202 07:08:31.874575 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:08:32 crc kubenswrapper[4842]: I0202 07:08:32.400417 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.186641 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerStarted","Data":"bebe8c74ad90a2dc028ad9e30942ced9f67c8af8df16026b5b89379d97e80e00"} Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.188066 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerStarted","Data":"1f08602808f0c1da9b996db624f132bc20c5b91004db8c9c6f2ffa67741d3bbc"} Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.188196 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerStarted","Data":"22718259310cd947182a28b08951d593ee087b709a27af6ee23d9b940e93c5ac"} Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.218517 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.2184964 podStartE2EDuration="2.2184964s" podCreationTimestamp="2026-02-02 07:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:33.213077025 +0000 UTC m=+1338.590344937" watchObservedRunningTime="2026-02-02 07:08:33.2184964 +0000 UTC m=+1338.595764312" Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.451327 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a4e099-0255-49f4-bcb4-7962af32cad2" path="/var/lib/kubelet/pods/b4a4e099-0255-49f4-bcb4-7962af32cad2/volumes" Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.910851 4842 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:49254->10.217.0.193:8775: read: connection reset by peer" Feb 02 07:08:33 crc kubenswrapper[4842]: I0202 07:08:33.910851 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:49270->10.217.0.193:8775: read: connection reset by peer" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.196337 4842 generic.go:334] "Generic (PLEG): container finished" podID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerID="582a5dd3542b08360b5bb369e0ddd50ae9403ee0b66668c8d7e065b109baa6aa" exitCode=0 Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.196453 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerDied","Data":"582a5dd3542b08360b5bb369e0ddd50ae9403ee0b66668c8d7e065b109baa6aa"} Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.417669 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.601786 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs\") pod \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.601933 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data\") pod \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.602889 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs\") pod \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.602926 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xwxg\" (UniqueName: \"kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg\") pod \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.602955 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle\") pod \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\" (UID: \"ec1cba88-8c9f-48bb-91fc-fc7675bba29a\") " Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.603324 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs" (OuterVolumeSpecName: "logs") pod "ec1cba88-8c9f-48bb-91fc-fc7675bba29a" (UID: "ec1cba88-8c9f-48bb-91fc-fc7675bba29a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.603701 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.609365 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg" (OuterVolumeSpecName: "kube-api-access-8xwxg") pod "ec1cba88-8c9f-48bb-91fc-fc7675bba29a" (UID: "ec1cba88-8c9f-48bb-91fc-fc7675bba29a"). InnerVolumeSpecName "kube-api-access-8xwxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.633467 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data" (OuterVolumeSpecName: "config-data") pod "ec1cba88-8c9f-48bb-91fc-fc7675bba29a" (UID: "ec1cba88-8c9f-48bb-91fc-fc7675bba29a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.638704 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec1cba88-8c9f-48bb-91fc-fc7675bba29a" (UID: "ec1cba88-8c9f-48bb-91fc-fc7675bba29a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.676113 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ec1cba88-8c9f-48bb-91fc-fc7675bba29a" (UID: "ec1cba88-8c9f-48bb-91fc-fc7675bba29a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.705522 4842 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.705556 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.705565 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xwxg\" (UniqueName: \"kubernetes.io/projected/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-kube-api-access-8xwxg\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:34 crc kubenswrapper[4842]: I0202 07:08:34.705575 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1cba88-8c9f-48bb-91fc-fc7675bba29a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.207793 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec1cba88-8c9f-48bb-91fc-fc7675bba29a","Type":"ContainerDied","Data":"a1edffd6229fcfd445e770ea5551a81134a2ceed05cbf411c15f38de72a6bfa9"} Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.207841 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.207860 4842 scope.go:117] "RemoveContainer" containerID="582a5dd3542b08360b5bb369e0ddd50ae9403ee0b66668c8d7e065b109baa6aa" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.209666 4842 generic.go:334] "Generic (PLEG): container finished" podID="46ba09a5-eecd-46b6-9182-96444c6de570" containerID="fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" exitCode=0 Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.210878 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46ba09a5-eecd-46b6-9182-96444c6de570","Type":"ContainerDied","Data":"fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c"} Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.240970 4842 scope.go:117] "RemoveContainer" containerID="e9568e435718a90b20e25e9432be05f2885e29c1c8378fa536932ac94aabd5f1" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.271054 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.282447 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.293727 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:35 crc kubenswrapper[4842]: E0202 07:08:35.294321 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-metadata" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.294335 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-metadata" Feb 02 07:08:35 crc kubenswrapper[4842]: E0202 07:08:35.294355 4842 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.294362 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.294605 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-log" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.294628 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" containerName="nova-metadata-metadata" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.295803 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.298066 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.298262 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.303773 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.421558 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz5c2\" (UniqueName: \"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.421859 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.421913 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.421949 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.421979 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.445937 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1cba88-8c9f-48bb-91fc-fc7675bba29a" path="/var/lib/kubelet/pods/ec1cba88-8c9f-48bb-91fc-fc7675bba29a/volumes" Feb 02 07:08:35 crc 
kubenswrapper[4842]: I0202 07:08:35.478558 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.523820 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.523900 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.523942 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.524009 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz5c2\" (UniqueName: \"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.524092 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.524405 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.529557 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.529606 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.532849 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.544207 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz5c2\" 
(UniqueName: \"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2\") pod \"nova-metadata-0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.609758 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.624797 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle\") pod \"46ba09a5-eecd-46b6-9182-96444c6de570\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.624905 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtj28\" (UniqueName: \"kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28\") pod \"46ba09a5-eecd-46b6-9182-96444c6de570\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.624977 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data\") pod \"46ba09a5-eecd-46b6-9182-96444c6de570\" (UID: \"46ba09a5-eecd-46b6-9182-96444c6de570\") " Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.628603 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28" (OuterVolumeSpecName: "kube-api-access-jtj28") pod "46ba09a5-eecd-46b6-9182-96444c6de570" (UID: "46ba09a5-eecd-46b6-9182-96444c6de570"). InnerVolumeSpecName "kube-api-access-jtj28". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.681507 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46ba09a5-eecd-46b6-9182-96444c6de570" (UID: "46ba09a5-eecd-46b6-9182-96444c6de570"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.681525 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data" (OuterVolumeSpecName: "config-data") pod "46ba09a5-eecd-46b6-9182-96444c6de570" (UID: "46ba09a5-eecd-46b6-9182-96444c6de570"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.728680 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.729573 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtj28\" (UniqueName: \"kubernetes.io/projected/46ba09a5-eecd-46b6-9182-96444c6de570-kube-api-access-jtj28\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:35 crc kubenswrapper[4842]: I0202 07:08:35.729607 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ba09a5-eecd-46b6-9182-96444c6de570-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.081549 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.224358 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"46ba09a5-eecd-46b6-9182-96444c6de570","Type":"ContainerDied","Data":"968efa1fb3cd3082b0218178700a10a30e92c9574cb73ef9bff028ccdf092975"} Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.224440 4842 scope.go:117] "RemoveContainer" containerID="fafeb3817a31a7a0fb62f345433970bfd99201eb46a5c80f3211d7f7e964cd2c" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.224474 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.226283 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerStarted","Data":"97d85497136bca54efa2ce8c8d3033b9016ab0e739dcabcdf04a8ad306a7c1b7"} Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.289619 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.307910 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.367781 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:36 crc kubenswrapper[4842]: E0202 07:08:36.368531 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" containerName="nova-scheduler-scheduler" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.368556 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" containerName="nova-scheduler-scheduler" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.368743 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" containerName="nova-scheduler-scheduler" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.369525 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.371998 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.389772 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.542807 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.542990 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.543188 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69gq\" (UniqueName: \"kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.644728 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69gq\" (UniqueName: \"kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.644910 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.644951 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.649909 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.649988 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.671539 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69gq\" (UniqueName: 
\"kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq\") pod \"nova-scheduler-0\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " pod="openstack/nova-scheduler-0" Feb 02 07:08:36 crc kubenswrapper[4842]: I0202 07:08:36.720573 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:08:37 crc kubenswrapper[4842]: I0202 07:08:37.239699 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:08:37 crc kubenswrapper[4842]: I0202 07:08:37.242974 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerStarted","Data":"c6b2aef7c5907fec1f821bb206e985dfa1c10ebd9ed998f2f05ec13c6cf132ab"} Feb 02 07:08:37 crc kubenswrapper[4842]: I0202 07:08:37.243039 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerStarted","Data":"415d21f9580ea68e52aa649eacebbe3550d2da28410a54eb695a4a912d91fbdd"} Feb 02 07:08:37 crc kubenswrapper[4842]: W0202 07:08:37.243917 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f94c60e_a4fc_4b7d_96cd_367d46a731c4.slice/crio-95e75a79dbca9de8ff0edaf83bbf9a981efefb176ab75feebb5919ac4f34c81f WatchSource:0}: Error finding container 95e75a79dbca9de8ff0edaf83bbf9a981efefb176ab75feebb5919ac4f34c81f: Status 404 returned error can't find the container with id 95e75a79dbca9de8ff0edaf83bbf9a981efefb176ab75feebb5919ac4f34c81f Feb 02 07:08:37 crc kubenswrapper[4842]: I0202 07:08:37.276107 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.276081413 podStartE2EDuration="2.276081413s" podCreationTimestamp="2026-02-02 07:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:37.27235565 +0000 UTC m=+1342.649623572" watchObservedRunningTime="2026-02-02 07:08:37.276081413 +0000 UTC m=+1342.653349365" Feb 02 07:08:37 crc kubenswrapper[4842]: I0202 07:08:37.459832 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ba09a5-eecd-46b6-9182-96444c6de570" path="/var/lib/kubelet/pods/46ba09a5-eecd-46b6-9182-96444c6de570/volumes" Feb 02 07:08:38 crc kubenswrapper[4842]: I0202 07:08:38.259876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f94c60e-a4fc-4b7d-96cd-367d46a731c4","Type":"ContainerStarted","Data":"aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc"} Feb 02 07:08:38 crc kubenswrapper[4842]: I0202 07:08:38.259948 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f94c60e-a4fc-4b7d-96cd-367d46a731c4","Type":"ContainerStarted","Data":"95e75a79dbca9de8ff0edaf83bbf9a981efefb176ab75feebb5919ac4f34c81f"} Feb 02 07:08:38 crc kubenswrapper[4842]: I0202 07:08:38.292153 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.292117472 podStartE2EDuration="2.292117472s" podCreationTimestamp="2026-02-02 07:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:08:38.281462728 +0000 UTC m=+1343.658730670" 
watchObservedRunningTime="2026-02-02 07:08:38.292117472 +0000 UTC m=+1343.669385394" Feb 02 07:08:40 crc kubenswrapper[4842]: I0202 07:08:40.609974 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 07:08:40 crc kubenswrapper[4842]: I0202 07:08:40.610533 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 02 07:08:41 crc kubenswrapper[4842]: I0202 07:08:41.721704 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 02 07:08:41 crc kubenswrapper[4842]: I0202 07:08:41.875824 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:08:41 crc kubenswrapper[4842]: I0202 07:08:41.875884 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 02 07:08:42 crc kubenswrapper[4842]: I0202 07:08:42.889344 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:42 crc kubenswrapper[4842]: I0202 07:08:42.889453 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:45 crc kubenswrapper[4842]: I0202 07:08:45.610620 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 07:08:45 crc kubenswrapper[4842]: I0202 07:08:45.611039 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 02 07:08:46 crc kubenswrapper[4842]: I0202 07:08:46.622444 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:46 crc kubenswrapper[4842]: I0202 07:08:46.622460 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 02 07:08:46 crc kubenswrapper[4842]: I0202 07:08:46.720879 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 02 07:08:46 crc kubenswrapper[4842]: I0202 07:08:46.765873 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 02 07:08:47 crc kubenswrapper[4842]: I0202 07:08:47.422096 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 02 07:08:49 crc kubenswrapper[4842]: I0202 07:08:49.592136 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 02 07:08:51 crc kubenswrapper[4842]: I0202 07:08:51.884970 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
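
The probe transitions above follow the standard two-stage gate: while a startup probe is defined and not yet successful, readiness results are not acted on, which is why nova-api-0 reports startup "unhealthy" at 07:08:41 and only reaches readiness "ready" at 07:08:51 once startup flips to "started". A compact sketch of that gating (state fields and call sequence are invented for illustration):

    package main

    import "fmt"

    type podProbes struct{ started, ready bool }

    // observe applies one probe result; readiness is ignored until the
    // startup probe has succeeded, matching the SyncLoop (probe) entries.
    func (p *podProbes) observe(probe string, healthy bool) {
        switch probe {
        case "startup":
            if healthy {
                p.started = true
            }
        case "readiness":
            if p.started {
                p.ready = healthy
            }
        }
        fmt.Printf("probe=%s healthy=%v -> started=%v ready=%v\n", probe, healthy, p.started, p.ready)
    }

    func main() {
        p := &podProbes{}
        p.observe("startup", false)  // 07:08:41 status="unhealthy"
        p.observe("readiness", true) // ignored until startup passes
        p.observe("startup", true)   // 07:08:51 status="started"
        p.observe("readiness", true) // 07:08:51 status="ready"
    }
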
pod="openstack/nova-api-0" Feb 02 07:08:51 crc kubenswrapper[4842]: I0202 07:08:51.886135 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 07:08:51 crc kubenswrapper[4842]: I0202 07:08:51.887869 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 02 07:08:51 crc kubenswrapper[4842]: I0202 07:08:51.896018 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 07:08:52 crc kubenswrapper[4842]: I0202 07:08:52.426726 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 02 07:08:52 crc kubenswrapper[4842]: I0202 07:08:52.442153 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 02 07:08:55 crc kubenswrapper[4842]: I0202 07:08:55.617020 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 07:08:55 crc kubenswrapper[4842]: I0202 07:08:55.620978 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 02 07:08:55 crc kubenswrapper[4842]: I0202 07:08:55.623769 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 07:08:56 crc kubenswrapper[4842]: I0202 07:08:56.497589 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.796581 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.806098 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.819399 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.828485 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.829747 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.829895 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.933376 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.934164 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.934040 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.954485 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.954729 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="cinder-scheduler" containerID="cri-o://092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec" gracePeriod=30 Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.955093 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="probe" containerID="cri-o://bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf" gracePeriod=30 Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.970322 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.971830 
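
The "Killing container with a grace period" entries above for cinder-scheduler-0 (gracePeriod=30) describe the usual termination contract: signal the container, wait up to the grace period for a clean exit, then force-kill. A minimal Unix sketch of that contract driving a local process rather than a CRI-O container, purely for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // killWithGrace sends SIGTERM, waits up to grace for exit, then SIGKILLs.
    func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
        _ = cmd.Process.Signal(syscall.SIGTERM)
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case <-done:
            fmt.Println("exited within grace period")
        case <-time.After(grace):
            _ = cmd.Process.Kill() // SIGKILL once the grace period elapses
            <-done
            fmt.Println("force-killed after grace period")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "300")
        _ = cmd.Start()
        killWithGrace(cmd, 30*time.Second) // 30s, as in the gracePeriod=30 entries
    }
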
4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.989492 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.989763 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api-log" containerID="cri-o://bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070" gracePeriod=30 Feb 02 07:09:13 crc kubenswrapper[4842]: I0202 07:09:13.989884 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api" containerID="cri-o://35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab" gracePeriod=30 Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.016881 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.018254 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.021929 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6\") pod \"root-account-create-update-kl9p2\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038621 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038831 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038854 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038869 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038893 4842 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038931 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038980 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.038997 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.039028 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.039049 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdbkt\" (UniqueName: \"kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.044459 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.060481 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-h2lm5"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.096255 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"] Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.098989 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-fbkfk"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.106328 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-h2lm5"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.116682 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143353 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143420 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143449 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143466 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skr4t\" (UniqueName: \"kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143498 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143522 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdbkt\" (UniqueName: \"kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143575 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143603 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143625 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143641 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143664 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.143700 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.153325 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.153960 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.164966 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.165074 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.168179 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.178984 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.180377 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kl9p2"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.186482 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.193077 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.225651 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.248366 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.248405 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skr4t\" (UniqueName: \"kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.249435 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.256967 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdbkt\" (UniqueName: \"kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt\") pod \"barbican-worker-5cf958d9d9-vvzkc\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.271291 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.291270 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.292442 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.310852 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.311094 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cf958d9d9-vvzkc"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.376438 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-89ff-account-create-update-pb4bw"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.386575 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.386696 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljflm\" (UniqueName: \"kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.388815 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skr4t\" (UniqueName: \"kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t\") pod \"nova-api-89ff-account-create-update-fbkfk\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " pod="openstack/nova-api-89ff-account-create-update-fbkfk"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.488325 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-hm58m"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.491361 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.491439 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljflm\" (UniqueName: \"kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.494619 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.547780 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-89ff-account-create-update-pb4bw"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.562673 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljflm\" (UniqueName: \"kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm\") pod \"nova-cell1-17c9-account-create-update-6xs6n\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " pod="openstack/nova-cell1-17c9-account-create-update-6xs6n"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.579345 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-hm58m"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.611323 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-fbkfk"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.636916 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb\") pod \"barbican-keystone-listener-687b99dfd8-skrq6\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.653235 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.653464 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="590d1088-e964-43a6-b879-01c8b83d4147" containerName="openstackclient" containerID="cri-o://7321f950b4c167a7b34d5c400d350da10c11bc84a859361985534a57f9758316" gracePeriod=2
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.668645 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.677113 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.710403 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.855276 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-654fdfd6b6-nrxvh"]
Feb 02 07:09:14 crc kubenswrapper[4842]: E0202 07:09:14.855941 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="590d1088-e964-43a6-b879-01c8b83d4147" containerName="openstackclient"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.855953 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="590d1088-e964-43a6-b879-01c8b83d4147" containerName="openstackclient"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.856120 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="590d1088-e964-43a6-b879-01c8b83d4147" containerName="openstackclient"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.857071 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.876070 4842 generic.go:334] "Generic (PLEG): container finished" podID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerID="bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070" exitCode=143
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.876113 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerDied","Data":"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070"}
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.879567 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.889632 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-654fdfd6b6-nrxvh"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925528 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925562 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925598 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925635 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925670 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925688 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.925726 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.927961 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.929129 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.934557 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.975471 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"]
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.976769 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-j8g5r"
Feb 02 07:09:14 crc kubenswrapper[4842]: I0202 07:09:14.980314 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.014297 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027024 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027094 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027139 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2cq2\" (UniqueName: \"kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027196 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027229 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027251 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027280 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027308 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.027324 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.029634 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.029750 4842 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.029828 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:15.529808355 +0000 UTC m=+1380.907076267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : secret "barbican-config-data" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.036397 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.042768 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.044419 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.047765 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.051763 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-x4f2v"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.054892 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.055790 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.093631 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.105573 4842 projected.go:194] Error preparing data for projected volume kube-api-access-h5vs6 for pod openstack/barbican-api-654fdfd6b6-nrxvh: failed to fetch token: serviceaccounts "barbican-barbican" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.105670 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6 podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:15.605649364 +0000 UTC m=+1380.982917276 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h5vs6" (UniqueName: "kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : failed to fetch token: serviceaccounts "barbican-barbican" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.113075 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.129788 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.129934 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mmn\" (UniqueName: \"kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.130884 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.131487 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9"
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.131597 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.131638 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data podName:2b2ca532-dbbc-4148-8d2f-fc474685f0bd nodeName:}" failed. No retries permitted until 2026-02-02 07:09:15.631626138 +0000 UTC m=+1381.008894050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data") pod "rabbitmq-server-0" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd") : configmap "rabbitmq-config-data" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.131841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2cq2\" (UniqueName: \"kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.139069 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2348-account-create-update-l9hwl"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.143150 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.143204 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wrf\" (UniqueName: \"kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.151725 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2348-account-create-update-l9hwl"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.160391 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.161555 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2cq2\" (UniqueName: \"kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2\") pod \"nova-cell0-7f00-account-create-update-wfvs9\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " pod="openstack/nova-cell0-7f00-account-create-update-wfvs9"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.161621 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-z7blt"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.174366 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.186621 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.232545 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-85ce-account-create-update-szhp5"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.233822 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-szhp5"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.239035 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.246365 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.246707 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5svcs\" (UniqueName: \"kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.246741 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9mmn\" (UniqueName: \"kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.246879 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.246911 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9wrf\" (UniqueName: \"kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.247841 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.247896 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.248783 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.261417 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-85ce-account-create-update-szhp5"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.336020 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9wrf\" (UniqueName: \"kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf\") pod \"glance-2348-account-create-update-j8g5r\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " pod="openstack/glance-2348-account-create-update-j8g5r"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.346351 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.346795 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.351020 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc9ng\" (UniqueName: \"kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.351056 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5svcs\" (UniqueName: \"kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.351169 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.351302 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.352003 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.363503 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9mmn\" (UniqueName: \"kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn\") pod \"cinder-716d-account-create-update-x4f2v\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " pod="openstack/cinder-716d-account-create-update-x4f2v"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.375368 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5svcs\" (UniqueName: \"kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs\") pod \"neutron-bfdd-account-create-update-z7blt\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " pod="openstack/neutron-bfdd-account-create-update-z7blt"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.379558 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-llc96"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.395971 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-llc96"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.425150 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.425821 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="openstack-network-exporter" containerID="cri-o://12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573" gracePeriod=300
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.464553 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.464867 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-j8g5r"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.465727 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.464872 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc9ng\" (UniqueName: \"kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.521637 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-x4f2v"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.543893 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc9ng\" (UniqueName: \"kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng\") pod \"placement-85ce-account-create-update-szhp5\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " pod="openstack/placement-85ce-account-create-update-szhp5"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.571137 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52bba199-2794-4828-9a54-e1aac49fb223" path="/var/lib/kubelet/pods/52bba199-2794-4828-9a54-e1aac49fb223/volumes"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.572010 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668f221e-e491-4ec6-9f40-82dd1afc3ac8" path="/var/lib/kubelet/pods/668f221e-e491-4ec6-9f40-82dd1afc3ac8/volumes"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.573357 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9d15d01-9c12-4b4f-9cec-037a1d21fab1" path="/var/lib/kubelet/pods/a9d15d01-9c12-4b4f-9cec-037a1d21fab1/volumes"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.573937 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0cbe107-ad1a-47aa-9b91-4a08c8b712fb" path="/var/lib/kubelet/pods/e0cbe107-ad1a-47aa-9b91-4a08c8b712fb/volumes"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.575197 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef83800c-79dc-4cfa-9f7c-194a44995d12" path="/var/lib/kubelet/pods/ef83800c-79dc-4cfa-9f7c-194a44995d12/volumes"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.577767 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.580766 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.580850 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.582764 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.583275 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.583824 4842 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.583879 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:16.583863617 +0000 UTC m=+1381.961131529 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : secret "barbican-config-data" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.597116 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-z7blt"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.602176 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="ovsdbserver-nb" containerID="cri-o://c1acee4708434e2281340e86c5dcc1aec94647c18fa79ec17661ad1f08020e9f" gracePeriod=300
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.608077 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-szhp5"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.645562 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-716d-account-create-update-ft5kt"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.668273 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bfdd-account-create-update-rws4k"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.683727 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-716d-account-create-update-ft5kt"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.684698 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cg6x\" (UniqueName: \"kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.684779 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.684854 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh"
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.685089 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.685160 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data podName:2b2ca532-dbbc-4148-8d2f-fc474685f0bd nodeName:}" failed. No retries permitted until 2026-02-02 07:09:16.685140726 +0000 UTC m=+1382.062408638 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data") pod "rabbitmq-server-0" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd") : configmap "rabbitmq-config-data" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.694193 4842 projected.go:194] Error preparing data for projected volume kube-api-access-h5vs6 for pod openstack/barbican-api-654fdfd6b6-nrxvh: failed to fetch token: serviceaccounts "barbican-barbican" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: E0202 07:09:15.694290 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6 podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:16.694269959 +0000 UTC m=+1382.071537871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-h5vs6" (UniqueName: "kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : failed to fetch token: serviceaccounts "barbican-barbican" not found
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.725457 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bfdd-account-create-update-rws4k"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.743763 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85ce-account-create-update-rxmcp"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.757980 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-77gxn"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.773331 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8e42-account-create-update-mtd79"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.785701 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-77gxn"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.787670 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.788003 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cg6x\" (UniqueName: \"kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.788856 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.800318 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85ce-account-create-update-rxmcp"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.813830 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cg6x\" (UniqueName: \"kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x\") pod \"barbican-8e42-account-create-update-pssf7\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.813976 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8e42-account-create-update-mtd79"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.829002 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-d648k"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.837361 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-phj68"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.859802 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-phj68"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.866862 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-d648k"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.876090 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7qxb9"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.885487 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.885798 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="ovn-northd" containerID="cri-o://6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" gracePeriod=30
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.886193 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="openstack-network-exporter" containerID="cri-o://e96862cf77fa128f12f3b9982dfad78848395bebaf2c0c3ff7a1cca181e725f0" gracePeriod=30
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.895737 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7qxb9"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.923396 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2ddsf"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.926015 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.928100 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2ddsf"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.932410 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bff6dd37-52b7-41b4-bc15-4f6436cdabc7/ovsdbserver-nb/0.log"
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.932450 4842 generic.go:334] "Generic (PLEG): container finished" podID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerID="12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573" exitCode=2
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.932466 4842 generic.go:334] "Generic (PLEG): container finished" podID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerID="c1acee4708434e2281340e86c5dcc1aec94647c18fa79ec17661ad1f08020e9f" exitCode=143
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.932529 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerDied","Data":"12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573"}
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.932554 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerDied","Data":"c1acee4708434e2281340e86c5dcc1aec94647c18fa79ec17661ad1f08020e9f"}
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.936365 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-rpkx6"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.944497 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-rpkx6"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.961978 4842 generic.go:334] "Generic (PLEG): container finished" podID="115a51a9-6125-46e1-a960-a66cb9957d38" containerID="092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec" exitCode=0
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.962045 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerDied","Data":"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec"}
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.969236 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerStarted","Data":"d69c45eb45e674be84418f12982b88cbb7cb13f89d733e29e26157326878116c"}
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.971339 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-4glck"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.971524 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-4glck" podUID="a768c72b-df6d-463e-b085-996d7b910985" containerName="openstack-network-exporter" containerID="cri-o://a62e03cec1bb8e57732f90cf545c9f9612917cecf937c100e89f185e517fa7dd" gracePeriod=30
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.978030 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sgwrm"]
Feb 02 07:09:15 crc kubenswrapper[4842]: I0202 07:09:15.987716 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-vctt8"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:15.994875 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:15.995431 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="openstack-network-exporter" containerID="cri-o://c2eb9657c42f955c0263cd3a4cee2ba4741ed6bed3e4fa84ae9f59564a660266" gracePeriod=300
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.002838 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-pssf7"
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.030502 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kl9p2" event={"ID":"b912e45d-72e7-4250-9757-add1efcfb054","Type":"ContainerStarted","Data":"c436c98ac030592508317571235d4b580f2fca45d60bf44a940ecdb59f089266"}
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.040795 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.041139 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" containerName="dnsmasq-dns" containerID="cri-o://b1f4bec090a15a8f33492373710dad94faf1e40a938d6cc9e964fd93f07eecf3" gracePeriod=10
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.060228 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-sjstk"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.087087 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-sjstk"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.144388 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="ovsdbserver-sb" containerID="cri-o://6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" gracePeriod=300
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.190880 4842 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-sgwrm" message=<
Feb 02 07:09:16 crc kubenswrapper[4842]: Exiting ovn-controller (1) [ OK ]
Feb 02 07:09:16 crc kubenswrapper[4842]: >
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.190938 4842 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-sgwrm" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" containerID="cri-o://42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7"
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.190972 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-sgwrm" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" containerID="cri-o://42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.198010 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.200477 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-metadata" containerID="cri-o://c6b2aef7c5907fec1f821bb206e985dfa1c10ebd9ed998f2f05ec13c6cf132ab" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.200610 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-log" containerID="cri-o://415d21f9580ea68e52aa649eacebbe3550d2da28410a54eb695a4a912d91fbdd" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.238730 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.263700 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.285202 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.285546 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-log" containerID="cri-o://1f08602808f0c1da9b996db624f132bc20c5b91004db8c9c6f2ffa67741d3bbc" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.290521 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-api" containerID="cri-o://bebe8c74ad90a2dc028ad9e30942ced9f67c8af8df16026b5b89379d97e80e00" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.333577 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.333896 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data podName:441d47f7-e5dd-456f-b6fa-10a642be6742 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:16.833875717 +0000 UTC m=+1382.211143629 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data") pod "rabbitmq-cell1-server-0" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742") : configmap "rabbitmq-cell1-config-data" not found
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.368272 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.403690 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-kbdxw"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.419597 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-kbdxw"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.449365 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.450298 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5b5c67fdbd-zsx96" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-log" containerID="cri-o://6586c2e8f7af2e360086efaa4a8a6c6f2493d034bdc7ef3f3fa3fe1325d17da7" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.451264 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5b5c67fdbd-zsx96" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-api" containerID="cri-o://c1cc1b81874f37b6dd69a794f4c89e58f1e938624f539804095c18ceb3989c67" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.460520 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.460753 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-log" containerID="cri-o://baeb51b0b4bb9444bd98551a3cc3dcb68f182ab93c0b62223c4c0a0707790ceb" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.460877 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-httpd" containerID="cri-o://50694d5591176c65770672c30837d60f3438d04ee3ca91b5bc53b0366f9835df" gracePeriod=30
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.477736 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kl9p2"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.513236 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jph4l"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.527757 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jph4l"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.569751 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.581164 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.581403 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0"
podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-log" containerID="cri-o://c593d09b2735487782551786767a4ed77fad095c2d0a78c5ed62f1b78de5ce7e" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.581814 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-httpd" containerID="cri-o://72e60f391adc327a7666947b2251ee7da0c5b5a42927991c1ba5e739d160e596" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.594826 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6684555597-gjtgz"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.595045 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6684555597-gjtgz" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-api" containerID="cri-o://679d0126323f1cafc695474001597b9d37c1a23ba5158a00e7f240fffa003eca" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.595458 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6684555597-gjtgz" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-httpd" containerID="cri-o://69048ee01a49fa4ed888b0c135134e06af01f907b56780330edbc72e09136e83" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.603331 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-hhd7d"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.616417 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-hhd7d"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.641501 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.641718 4842 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.641766 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:18.641749666 +0000 UTC m=+1384.019017568 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : secret "barbican-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.669496 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670398 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-server" containerID="cri-o://496f7c8f3a8e1190f069f9d123dad4f03c5ddc2c339a3a530d938ce75113f766" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670487 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-server" containerID="cri-o://78ea2470e0bb66602235ee6f953b1cb50c60bbf2dda3d60aa9ded3436730161c" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670463 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-server" containerID="cri-o://5fe6ac9847ee5629c3a3a2ccb929b05946534e86d95fae65cd97cbab654c7391" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670672 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-expirer" containerID="cri-o://c3ceba27f85cf9e18b4c96e9c35e3e830a3840e245ff37876679745418c599df" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670642 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-auditor" containerID="cri-o://98d05e29848a090df093dcb34910845ebd22086e918c4b510210550b0fcd98f9" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670651 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-replicator" containerID="cri-o://84a64916ad5a870dd2730290e371bd4ee7a327af7bfa716ae7b3457657e3b792" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670733 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-updater" containerID="cri-o://11c87109b1d73f0312d44a7a194b500b7f7e551073a65468bc291891955fd1d1" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670750 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="swift-recon-cron" containerID="cri-o://a0ba4c6bbf6b05d401f52ab663d9f47cbde0cebb5dfcb8997ff120cffdd05060" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670783 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="rsync" containerID="cri-o://419e27de3686d1a75400d18f391cbe54519868631357cce324a86c057a1dbbfe" 
gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670796 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-auditor" containerID="cri-o://3accf74226bf0263e16fdcc906f97a58d41768cb604252689a8c7a9fac50f04f" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670812 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-replicator" containerID="cri-o://a6f0be0e71192334da01f394f7e0075f3ff472a60d737f40449f0c7c56b45801" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670851 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-reaper" containerID="cri-o://1864c37f5464bef32be4591740d73c6be777716e778338b57e2c23f30b098973" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670890 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-auditor" containerID="cri-o://81e3b07657ef3f1d8e0c81f783b14b3167b42779f998c664f2c184857a6ffc8b" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670629 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-updater" containerID="cri-o://94a480917554fbdc9c94fdc240db04a25556fac19911eb5945a6838a7169e5f3" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.670892 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-replicator" containerID="cri-o://0579b6675bbca573212a34273ea354bc485d0dead5d30e277230eaf0ce0b9594" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.743503 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.743788 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.743839 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data podName:2b2ca532-dbbc-4148-8d2f-fc474685f0bd nodeName:}" failed. No retries permitted until 2026-02-02 07:09:18.743822425 +0000 UTC m=+1384.121090337 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data") pod "rabbitmq-server-0" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd") : configmap "rabbitmq-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.747207 4842 projected.go:194] Error preparing data for projected volume kube-api-access-h5vs6 for pod openstack/barbican-api-654fdfd6b6-nrxvh: failed to fetch token: serviceaccounts "barbican-barbican" not found Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.747278 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6 podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:18.747263313 +0000 UTC m=+1384.124531225 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-h5vs6" (UniqueName: "kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : failed to fetch token: serviceaccounts "barbican-barbican" not found Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.755137 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.764029 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:16 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:16 crc kubenswrapper[4842]: Feb 02 07:09:16 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:16 crc kubenswrapper[4842]: Feb 02 07:09:16 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:16 crc kubenswrapper[4842]: Feb 02 07:09:16 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:16 crc kubenswrapper[4842]: Feb 02 07:09:16 crc kubenswrapper[4842]: if [ -n "nova_api" ]; then Feb 02 07:09:16 crc kubenswrapper[4842]: GRANT_DATABASE="nova_api" Feb 02 07:09:16 crc kubenswrapper[4842]: else Feb 02 07:09:16 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:16 crc kubenswrapper[4842]: fi Feb 02 07:09:16 crc kubenswrapper[4842]: Feb 02 07:09:16 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:16 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:16 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:16 crc kubenswrapper[4842]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:16 crc kubenswrapper[4842]: # support updates Feb 02 07:09:16 crc kubenswrapper[4842]: Feb 02 07:09:16 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.768864 4842 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Feb 02 07:09:16 crc kubenswrapper[4842]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 02 07:09:16 crc kubenswrapper[4842]: + source /usr/local/bin/container-scripts/functions Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNBridge=br-int Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNRemote=tcp:localhost:6642 Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNEncapType=geneve Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNAvailabilityZones= Feb 02 07:09:16 crc kubenswrapper[4842]: ++ EnableChassisAsGateway=true Feb 02 07:09:16 crc kubenswrapper[4842]: ++ PhysicalNetworks= Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNHostName= Feb 02 07:09:16 crc kubenswrapper[4842]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 02 07:09:16 crc kubenswrapper[4842]: ++ ovs_dir=/var/lib/openvswitch Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 02 07:09:16 crc kubenswrapper[4842]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 07:09:16 crc kubenswrapper[4842]: + sleep 0.5 Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 07:09:16 crc kubenswrapper[4842]: + cleanup_ovsdb_server_semaphore Feb 02 07:09:16 crc kubenswrapper[4842]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 07:09:16 crc kubenswrapper[4842]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 02 07:09:16 crc kubenswrapper[4842]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-vctt8" message=< Feb 02 07:09:16 crc kubenswrapper[4842]: Exiting ovsdb-server (5) [ OK ] Feb 02 07:09:16 crc kubenswrapper[4842]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 02 07:09:16 crc kubenswrapper[4842]: + source /usr/local/bin/container-scripts/functions Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNBridge=br-int Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNRemote=tcp:localhost:6642 Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNEncapType=geneve Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNAvailabilityZones= Feb 02 07:09:16 crc kubenswrapper[4842]: ++ EnableChassisAsGateway=true Feb 02 07:09:16 crc kubenswrapper[4842]: ++ PhysicalNetworks= Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNHostName= Feb 02 07:09:16 crc kubenswrapper[4842]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 02 07:09:16 crc kubenswrapper[4842]: ++ ovs_dir=/var/lib/openvswitch Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 02 07:09:16 crc kubenswrapper[4842]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 07:09:16 crc kubenswrapper[4842]: + sleep 0.5 Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 07:09:16 crc kubenswrapper[4842]: + cleanup_ovsdb_server_semaphore Feb 02 07:09:16 crc kubenswrapper[4842]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 07:09:16 crc kubenswrapper[4842]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 02 07:09:16 crc kubenswrapper[4842]: > Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.768905 4842 kuberuntime_container.go:691] "PreStop hook failed" err=< Feb 02 07:09:16 crc kubenswrapper[4842]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Feb 02 07:09:16 crc kubenswrapper[4842]: + source /usr/local/bin/container-scripts/functions Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNBridge=br-int Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNRemote=tcp:localhost:6642 Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNEncapType=geneve Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNAvailabilityZones= Feb 02 07:09:16 crc kubenswrapper[4842]: ++ EnableChassisAsGateway=true Feb 02 07:09:16 crc kubenswrapper[4842]: ++ PhysicalNetworks= Feb 02 07:09:16 crc kubenswrapper[4842]: ++ OVNHostName= Feb 02 07:09:16 crc kubenswrapper[4842]: ++ DB_FILE=/etc/openvswitch/conf.db Feb 02 07:09:16 crc kubenswrapper[4842]: ++ ovs_dir=/var/lib/openvswitch Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Feb 02 07:09:16 crc kubenswrapper[4842]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Feb 02 07:09:16 crc kubenswrapper[4842]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 07:09:16 crc kubenswrapper[4842]: + sleep 0.5 Feb 02 07:09:16 crc kubenswrapper[4842]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Feb 02 07:09:16 crc kubenswrapper[4842]: + cleanup_ovsdb_server_semaphore Feb 02 07:09:16 crc kubenswrapper[4842]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Feb 02 07:09:16 crc kubenswrapper[4842]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Feb 02 07:09:16 crc kubenswrapper[4842]: > pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" containerID="cri-o://a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.768936 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" containerID="cri-o://a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.769198 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-89ff-account-create-update-fbkfk" podUID="8dad4bc1-b1ae-436c-925e-986d33b77e51" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.779407 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vsjtz"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.791711 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" containerID="cri-o://3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.805720 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vsjtz"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.820418 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"] Feb 02 07:09:16 crc kubenswrapper[4842]: W0202 07:09:16.830929 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod748756c2_ee60_42ce_835e_bfaa7007d7ac.slice/crio-09ed8d05d994b4f10b7eef605b2f606beee05a7896873233e85ba84f7bd5475e WatchSource:0}: Error finding container 09ed8d05d994b4f10b7eef605b2f606beee05a7896873233e85ba84f7bd5475e: Status 404 returned error can't find the container with id 09ed8d05d994b4f10b7eef605b2f606beee05a7896873233e85ba84f7bd5475e Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.830986 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.839133 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dg9pd"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.840780 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="galera" containerID="cri-o://6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.845280 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.845664 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dg9pd"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850322 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850392 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmstk\" (UniqueName: \"kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850462 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850521 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850553 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.850636 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle\") pod \"115a51a9-6125-46e1-a960-a66cb9957d38\" (UID: \"115a51a9-6125-46e1-a960-a66cb9957d38\") " Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.852788 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.852865 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: E0202 07:09:16.852916 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data podName:441d47f7-e5dd-456f-b6fa-10a642be6742 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:17.852900863 +0000 UTC m=+1383.230168775 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data") pod "rabbitmq-cell1-server-0" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742") : configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.852970 4842 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115a51a9-6125-46e1-a960-a66cb9957d38-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.853201 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.860156 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.860387 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://19ce3a33fe25413f4f312112bb88f2cc8ceb19171589dbec9313d4c51f900ca1" gracePeriod=30 Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.864503 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts" (OuterVolumeSpecName: "scripts") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.866275 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk" (OuterVolumeSpecName: "kube-api-access-wmstk") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "kube-api-access-wmstk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.867830 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.868625 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-p28sd"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.876870 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-p28sd"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.896070 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85ce-account-create-update-szhp5"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.928855 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"] Feb 02 07:09:16 crc kubenswrapper[4842]: I0202 07:09:16.946380 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-8p487"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.956396 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmstk\" (UniqueName: \"kubernetes.io/projected/115a51a9-6125-46e1-a960-a66cb9957d38-kube-api-access-wmstk\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.956421 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.956433 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.959115 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-79v8r"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.971796 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.979604 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-8p487"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.981866 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bff6dd37-52b7-41b4-bc15-4f6436cdabc7/ovsdbserver-nb/0.log" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.981931 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:16.987041 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-79v8r"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.021642 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.026655 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:17 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: if [ -n "nova_cell1" ]; then Feb 02 07:09:17 crc kubenswrapper[4842]: GRANT_DATABASE="nova_cell1" Feb 02 07:09:17 crc kubenswrapper[4842]: else Feb 02 07:09:17 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:17 crc kubenswrapper[4842]: fi Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:17 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:17 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:17 crc kubenswrapper[4842]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:17 crc kubenswrapper[4842]: # support updates Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.028827 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" podUID="88d00cbf-6e28-4be5-abc2-6c77e76de81e" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.052777 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8rdwx"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.055762 4842 generic.go:334] "Generic (PLEG): container finished" podID="34f55116-a518-4f21-8816-6f8232a6f68d" containerID="c593d09b2735487782551786767a4ed77fad095c2d0a78c5ed62f1b78de5ce7e" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.055815 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerDied","Data":"c593d09b2735487782551786767a4ed77fad095c2d0a78c5ed62f1b78de5ce7e"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057128 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxl6n\" (UniqueName: \"kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057190 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057769 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057889 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057912 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.057953 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.058000 4842 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.058020 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs\") pod \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\" (UID: \"bff6dd37-52b7-41b4-bc15-4f6436cdabc7\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.060389 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config" (OuterVolumeSpecName: "config") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.067364 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8rdwx"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.067413 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.067859 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts" (OuterVolumeSpecName: "scripts") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.075167 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.076855 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.078644 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.087365 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.087624 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-57cc9f4749-jxzrq" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker-log" containerID="cri-o://2a1ff124f28b987212a2f7ed64a1bf208d310f3e9f13e80b4572c2dce5f8a5f9" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.088030 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-57cc9f4749-jxzrq" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker" containerID="cri-o://36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.089745 4842 generic.go:334] "Generic (PLEG): container finished" podID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerID="baeb51b0b4bb9444bd98551a3cc3dcb68f182ab93c0b62223c4c0a0707790ceb" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.089827 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerDied","Data":"baeb51b0b4bb9444bd98551a3cc3dcb68f182ab93c0b62223c4c0a0707790ceb"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.099546 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n" (OuterVolumeSpecName: "kube-api-access-pxl6n") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "kube-api-access-pxl6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.103367 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.111133 4842 generic.go:334] "Generic (PLEG): container finished" podID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerID="1f08602808f0c1da9b996db624f132bc20c5b91004db8c9c6f2ffa67741d3bbc" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.111209 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerDied","Data":"1f08602808f0c1da9b996db624f132bc20c5b91004db8c9c6f2ffa67741d3bbc"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.118456 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89ff-account-create-update-fbkfk" event={"ID":"8dad4bc1-b1ae-436c-925e-986d33b77e51","Type":"ContainerStarted","Data":"19b5b9e6138f019e100c7874a7e9ab2b0be50a7d46a7fd240461e516fb3462c0"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.135381 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.135597 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cc5c967fd-w6ljx" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api-log" containerID="cri-o://d4afe8e323946b2a091c267fa1099076188f1ad9d2a9b63f7930456fb99f3d8f" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.135726 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cc5c967fd-w6ljx" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api" containerID="cri-o://83c2404b835485135c772ac74f310b1761d22ef1f63c10393be3a87c53fc66aa" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.142811 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-654fdfd6b6-nrxvh"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.151028 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config-data kube-api-access-h5vs6], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/barbican-api-654fdfd6b6-nrxvh" podUID="72b63114-a275-4e32-9ad4-9f59e22151b3" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.159678 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.159696 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.159706 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxl6n\" (UniqueName: \"kubernetes.io/projected/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-kube-api-access-pxl6n\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.159716 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-config\") on node \"crc\" DevicePath \"\"" 
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.159734 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.165093 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="rabbitmq" containerID="cri-o://384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d" gracePeriod=604800 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.175498 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerStarted","Data":"09ed8d05d994b4f10b7eef605b2f606beee05a7896873233e85ba84f7bd5475e"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.176886 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.191503 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.191769 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener-log" containerID="cri-o://5a24327ba4517226f20e20f0a45585d27dd9a1490c6050d591f1638384be7d6d" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.192147 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener" containerID="cri-o://aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.206039 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.219055 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:17 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: if [ -n "nova_api" ]; then Feb 02 07:09:17 crc kubenswrapper[4842]: GRANT_DATABASE="nova_api" Feb 02 07:09:17 crc 
kubenswrapper[4842]: else Feb 02 07:09:17 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:17 crc kubenswrapper[4842]: fi Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:17 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:17 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:17 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:17 crc kubenswrapper[4842]: # support updates Feb 02 07:09:17 crc kubenswrapper[4842]: Feb 02 07:09:17 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.221644 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-89ff-account-create-update-fbkfk" podUID="8dad4bc1-b1ae-436c-925e-986d33b77e51" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.253886 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.254060 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerName="nova-scheduler-scheduler" containerID="cri-o://aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.263708 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.281795 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.300223 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302646 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="c3ceba27f85cf9e18b4c96e9c35e3e830a3840e245ff37876679745418c599df" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302674 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="11c87109b1d73f0312d44a7a194b500b7f7e551073a65468bc291891955fd1d1" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302681 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="3accf74226bf0263e16fdcc906f97a58d41768cb604252689a8c7a9fac50f04f" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302688 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="a6f0be0e71192334da01f394f7e0075f3ff472a60d737f40449f0c7c56b45801" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302703 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" 
containerID="94a480917554fbdc9c94fdc240db04a25556fac19911eb5945a6838a7169e5f3" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302710 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="98d05e29848a090df093dcb34910845ebd22086e918c4b510210550b0fcd98f9" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302716 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="84a64916ad5a870dd2730290e371bd4ee7a327af7bfa716ae7b3457657e3b792" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302722 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="78ea2470e0bb66602235ee6f953b1cb50c60bbf2dda3d60aa9ded3436730161c" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302729 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="1864c37f5464bef32be4591740d73c6be777716e778338b57e2c23f30b098973" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302737 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="81e3b07657ef3f1d8e0c81f783b14b3167b42779f998c664f2c184857a6ffc8b" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302747 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="0579b6675bbca573212a34273ea354bc485d0dead5d30e277230eaf0ce0b9594" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302830 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"c3ceba27f85cf9e18b4c96e9c35e3e830a3840e245ff37876679745418c599df"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302863 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"11c87109b1d73f0312d44a7a194b500b7f7e551073a65468bc291891955fd1d1"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302874 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"3accf74226bf0263e16fdcc906f97a58d41768cb604252689a8c7a9fac50f04f"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302888 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"a6f0be0e71192334da01f394f7e0075f3ff472a60d737f40449f0c7c56b45801"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302902 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"94a480917554fbdc9c94fdc240db04a25556fac19911eb5945a6838a7169e5f3"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302934 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"98d05e29848a090df093dcb34910845ebd22086e918c4b510210550b0fcd98f9"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302946 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"84a64916ad5a870dd2730290e371bd4ee7a327af7bfa716ae7b3457657e3b792"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302955 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"78ea2470e0bb66602235ee6f953b1cb50c60bbf2dda3d60aa9ded3436730161c"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302962 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"1864c37f5464bef32be4591740d73c6be777716e778338b57e2c23f30b098973"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302974 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"81e3b07657ef3f1d8e0c81f783b14b3167b42779f998c664f2c184857a6ffc8b"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.302985 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"0579b6675bbca573212a34273ea354bc485d0dead5d30e277230eaf0ce0b9594"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.307771 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.307958 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerName="nova-cell1-conductor-conductor" containerID="cri-o://b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.310243 4842 generic.go:334] "Generic (PLEG): container finished" podID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerID="e96862cf77fa128f12f3b9982dfad78848395bebaf2c0c3ff7a1cca181e725f0" exitCode=2 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.310285 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6064786a-fa53-47a7-88ee-384cf70a86c6","Type":"ContainerDied","Data":"e96862cf77fa128f12f3b9982dfad78848395bebaf2c0c3ff7a1cca181e725f0"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.311745 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pnj4n"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.312880 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" event={"ID":"88d00cbf-6e28-4be5-abc2-6c77e76de81e","Type":"ContainerStarted","Data":"595b44b024cc413350c4c52a2edd391699f6565dcef71575de95c9a8d45985fb"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.316504 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerStarted","Data":"04882b818d128bc118fdd65d9db4d076517b460bcb504e4f555e0244313167cc"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.317267 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pnj4n"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.326275 4842 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.326502 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor" containerID="cri-o://75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" gracePeriod=30 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.331848 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6htfz"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.336315 4842 generic.go:334] "Generic (PLEG): container finished" podID="115a51a9-6125-46e1-a960-a66cb9957d38" containerID="bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.336449 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.338278 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6htfz"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.342809 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerDied","Data":"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.342842 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115a51a9-6125-46e1-a960-a66cb9957d38","Type":"ContainerDied","Data":"d9adaa71516bc7f37ff65b80add9138abcfd4cb747d204e8aa686e59e5b9af28"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.342853 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.342880 4842 scope.go:117] "RemoveContainer" containerID="bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.347423 4842 generic.go:334] "Generic (PLEG): container finished" podID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerID="6586c2e8f7af2e360086efaa4a8a6c6f2493d034bdc7ef3f3fa3fe1325d17da7" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.347478 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerDied","Data":"6586c2e8f7af2e360086efaa4a8a6c6f2493d034bdc7ef3f3fa3fe1325d17da7"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.349354 4842 generic.go:334] "Generic (PLEG): container finished" podID="590d1088-e964-43a6-b879-01c8b83d4147" containerID="7321f950b4c167a7b34d5c400d350da10c11bc84a859361985534a57f9758316" exitCode=137 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.350883 4842 generic.go:334] "Generic (PLEG): container finished" podID="82827ec9-ac05-41ab-988c-99083ccdb949" containerID="b1f4bec090a15a8f33492373710dad94faf1e40a938d6cc9e964fd93f07eecf3" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.350917 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" event={"ID":"82827ec9-ac05-41ab-988c-99083ccdb949","Type":"ContainerDied","Data":"b1f4bec090a15a8f33492373710dad94faf1e40a938d6cc9e964fd93f07eecf3"} Feb 02 07:09:17 
crc kubenswrapper[4842]: I0202 07:09:17.352808 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.359466 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"]
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.362039 4842 generic.go:334] "Generic (PLEG): container finished" podID="953bf671-ca79-4208-9bab-672dc079dd82" containerID="69048ee01a49fa4ed888b0c135134e06af01f907b56780330edbc72e09136e83" exitCode=0
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.362104 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerDied","Data":"69048ee01a49fa4ed888b0c135134e06af01f907b56780330edbc72e09136e83"}
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.369662 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.369690 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.375463 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9 is running failed: container process not found" containerID="6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" cmd=["/usr/bin/pidof","ovsdb-server"]
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.375879 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9 is running failed: container process not found" containerID="6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" cmd=["/usr/bin/pidof","ovsdb-server"]
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.382022 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9 is running failed: container process not found" containerID="6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" cmd=["/usr/bin/pidof","ovsdb-server"]
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.382097 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="ovsdbserver-sb"
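The three identical ExecSync failures above are the kubelet's exec readiness probe for ovsdbserver-sb racing container teardown: the command is retried, and only then does prober.go record the probe error. The same check can be reproduced by hand; a hypothetical spot-check, assuming crictl is available on the node and using the container ID from the log:

    # run the probe command (from cmd=[...] above) against the container;
    # once the container process is gone, cri-o returns NotFound exactly as logged
    CTR=6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9
    crictl exec "$CTR" /usr/bin/pidof ovsdb-server && echo ready || echo "not ready (or container gone)"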
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.384774 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a31583c1-5fde-4763-a889-7257255fa217/ovsdbserver-sb/0.log" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.384807 4842 generic.go:334] "Generic (PLEG): container finished" podID="a31583c1-5fde-4763-a889-7257255fa217" containerID="c2eb9657c42f955c0263cd3a4cee2ba4741ed6bed3e4fa84ae9f59564a660266" exitCode=2 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.384822 4842 generic.go:334] "Generic (PLEG): container finished" podID="a31583c1-5fde-4763-a889-7257255fa217" containerID="6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.384861 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerDied","Data":"c2eb9657c42f955c0263cd3a4cee2ba4741ed6bed3e4fa84ae9f59564a660266"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.384879 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerDied","Data":"6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.389522 4842 generic.go:334] "Generic (PLEG): container finished" podID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerID="415d21f9580ea68e52aa649eacebbe3550d2da28410a54eb695a4a912d91fbdd" exitCode=143 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.389568 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerDied","Data":"415d21f9580ea68e52aa649eacebbe3550d2da28410a54eb695a4a912d91fbdd"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.390637 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data" (OuterVolumeSpecName: "config-data") pod "115a51a9-6125-46e1-a960-a66cb9957d38" (UID: "115a51a9-6125-46e1-a960-a66cb9957d38"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.410715 4842 generic.go:334] "Generic (PLEG): container finished" podID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerID="42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.410790 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm" event={"ID":"e467a49f-fdc1-4a9e-9907-4425f5ec6177","Type":"ContainerDied","Data":"42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.417258 4842 generic.go:334] "Generic (PLEG): container finished" podID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" exitCode=0 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.417311 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerDied","Data":"a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.421352 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.427372 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_bff6dd37-52b7-41b4-bc15-4f6436cdabc7/ovsdbserver-nb/0.log" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.427492 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"bff6dd37-52b7-41b4-bc15-4f6436cdabc7","Type":"ContainerDied","Data":"0b86eb955efed6c0beae4754f7a259bd87ec4d6377bfa3532f73d18514ea5e3d"} Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.427577 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.435252 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4glck_a768c72b-df6d-463e-b085-996d7b910985/openstack-network-exporter/0.log" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.435287 4842 generic.go:334] "Generic (PLEG): container finished" podID="a768c72b-df6d-463e-b085-996d7b910985" containerID="a62e03cec1bb8e57732f90cf545c9f9612917cecf937c100e89f185e517fa7dd" exitCode=2 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.454076 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15fb5e79-8dd5-46ae-b8dd-6944cc810350" path="/var/lib/kubelet/pods/15fb5e79-8dd5-46ae-b8dd-6944cc810350/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.457668 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c72b5c-16bb-4404-8c00-6b37ed7d9b47" path="/var/lib/kubelet/pods/27c72b5c-16bb-4404-8c00-6b37ed7d9b47/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.458180 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8715fd-8755-4bd6-82a7-bf49d61e1779" path="/var/lib/kubelet/pods/2d8715fd-8755-4bd6-82a7-bf49d61e1779/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.458684 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31bf41ed-98c7-44ed-abba-93b74a546e71" path="/var/lib/kubelet/pods/31bf41ed-98c7-44ed-abba-93b74a546e71/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.460284 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38cfcc24-6854-414a-9d6c-4769e1366eb1" path="/var/lib/kubelet/pods/38cfcc24-6854-414a-9d6c-4769e1366eb1/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.460777 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4b2578-8a31-4097-afd3-04bae6621094" path="/var/lib/kubelet/pods/3f4b2578-8a31-4097-afd3-04bae6621094/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.461279 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b414999-f3d0-4101-abe7-ed8c7747ce5f" path="/var/lib/kubelet/pods/4b414999-f3d0-4101-abe7-ed8c7747ce5f/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.462831 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6418a243-5699-42a3-8fab-d65c530c9951" path="/var/lib/kubelet/pods/6418a243-5699-42a3-8fab-d65c530c9951/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.463671 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80249ec8-3d5a-4020-bed2-83b8ecd32ab9" path="/var/lib/kubelet/pods/80249ec8-3d5a-4020-bed2-83b8ecd32ab9/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.464341 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939ed5f9-679d-44c4-8282-d1404d98b420" path="/var/lib/kubelet/pods/939ed5f9-679d-44c4-8282-d1404d98b420/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.464819 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c852e5a-26fe-4905-8483-4619c280f9c0" path="/var/lib/kubelet/pods/9c852e5a-26fe-4905-8483-4619c280f9c0/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.466559 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1048c2f-1504-465a-b0fb-da368d25f0ff" path="/var/lib/kubelet/pods/a1048c2f-1504-465a-b0fb-da368d25f0ff/volumes" Feb 
02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.467512 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8cd42ce-4a62-486b-9571-58d789ca2d38" path="/var/lib/kubelet/pods/b8cd42ce-4a62-486b-9571-58d789ca2d38/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.468152 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c49955b5-5145-4939-91e5-280569e18a33" path="/var/lib/kubelet/pods/c49955b5-5145-4939-91e5-280569e18a33/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.469496 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c51cea52-ce54-4855-9d4c-97817c4cc6b0" path="/var/lib/kubelet/pods/c51cea52-ce54-4855-9d4c-97817c4cc6b0/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.470802 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf6c9856-8e0e-462e-a2bb-b21847078b54" path="/var/lib/kubelet/pods/cf6c9856-8e0e-462e-a2bb-b21847078b54/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.471404 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0854221-b7f1-4e7c-89bc-b9f14d1b29c2" path="/var/lib/kubelet/pods/d0854221-b7f1-4e7c-89bc-b9f14d1b29c2/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.471911 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d82484f3-c883-4c12-8ca1-6de8ead67139" path="/var/lib/kubelet/pods/d82484f3-c883-4c12-8ca1-6de8ead67139/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.472388 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115a51a9-6125-46e1-a960-a66cb9957d38-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.472422 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.472865 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9f1c72e-953b-45ba-ba69-c7574f82e8ad" path="/var/lib/kubelet/pods/d9f1c72e-953b-45ba-ba69-c7574f82e8ad/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.475013 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "bff6dd37-52b7-41b4-bc15-4f6436cdabc7" (UID: "bff6dd37-52b7-41b4-bc15-4f6436cdabc7"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.475958 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="rabbitmq" containerID="cri-o://3913ec835fcef00ab7ba5cfa0bb102b1d808857fbee96be0da99ede67f9672b5" gracePeriod=604800 Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.476373 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1ffaeb5-5dc3-4ead-8b43-701f81a8c965" path="/var/lib/kubelet/pods/f1ffaeb5-5dc3-4ead-8b43-701f81a8c965/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.478131 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb013bc6-805e-43d5-95f8-98597c33fa9e" path="/var/lib/kubelet/pods/fb013bc6-805e-43d5-95f8-98597c33fa9e/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.479719 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff8a308-89ab-409f-9053-6a363794df83" path="/var/lib/kubelet/pods/fff8a308-89ab-409f-9053-6a363794df83/volumes" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.480731 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4glck" event={"ID":"a768c72b-df6d-463e-b085-996d7b910985","Type":"ContainerDied","Data":"a62e03cec1bb8e57732f90cf545c9f9612917cecf937c100e89f185e517fa7dd"} Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.512321 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.513579 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.514539 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.514591 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.573760 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bff6dd37-52b7-41b4-bc15-4f6436cdabc7-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.582701 4842 scope.go:117] "RemoveContainer" containerID="092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.641909 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.642122 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"]
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.644596 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-659598d599-lpzh5" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-httpd" containerID="cri-o://1e413e67564e718a498ac35eeced53092dbd9372163eaf63c69cfa47632f99ec" gracePeriod=30
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.644872 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-659598d599-lpzh5" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-server" containerID="cri-o://49dfdfa99a47811582b530171bcdb672444bf58776e14b517fe66bf3f7abc750" gracePeriod=30
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.662604 4842 scope.go:117] "RemoveContainer" containerID="bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf"
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.664998 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf\": container with ID starting with bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf not found: ID does not exist" containerID="bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.665033 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf"} err="failed to get container status \"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf\": rpc error: code = NotFound desc = could not find container \"bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf\": container with ID starting with bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf not found: ID does not exist"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.665056 4842 scope.go:117] "RemoveContainer" containerID="092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec"
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.667837 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec\": container with ID starting with 092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec not found: ID does not exist" containerID="092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.667879 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec"} err="failed to get container status \"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec\": rpc error: code = NotFound desc = could not find container \"092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec\": container with ID starting with 092ec23856ddf7c87f1db2b8f8dedaf3b76e7104cefaca2c00891af5dbd0e8ec not found: ID does not exist"
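The RemoveContainer / ContainerStatus NotFound / DeleteContainer sequence above is the kubelet deleting containers the runtime has already reaped: the status lookup fails with NotFound, so the delete is reported as an error even though there is nothing left to clean up. A hypothetical confirmation from the node, assuming crictl:

    # NotFound from inspect means the container is already gone from cri-o
    ID=bfc6d5e3d20fcf147f2a351ad85a3e522f9d2e24e1de0ae3e5b2d48bdc682cbf
    crictl inspect "$ID" >/dev/null 2>&1 || echo "container $ID already removed"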
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.667902 4842 scope.go:117] "RemoveContainer" containerID="12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573"
Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.689187 4842 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/ovsdbserver-nb-0_openstack_openstack-network-exporter-12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573.log: no such file or directory" path="/var/log/containers/ovsdbserver-nb-0_openstack_openstack-network-exporter-12cbd4046092af30937f505c373f7a1da7ef6152e4425d8dee20e3b127f7d573.log"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.704113 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-8dp78"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.753963 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.758841 4842 scope.go:117] "RemoveContainer" containerID="c1acee4708434e2281340e86c5dcc1aec94647c18fa79ec17661ad1f08020e9f"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.759870 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.770584 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4glck_a768c72b-df6d-463e-b085-996d7b910985/openstack-network-exporter/0.log"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.770665 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4glck"
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.772404 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.796035 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.796109 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.796161 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.796199 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") "
Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797127 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID:
"a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797144 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797147 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797200 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797275 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797335 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797398 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797441 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a31583c1-5fde-4763-a889-7257255fa217/ovsdbserver-sb/0.log" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797507 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797517 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797533 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h79wj\" (UniqueName: \"kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797560 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs\") pod \"a768c72b-df6d-463e-b085-996d7b910985\" (UID: \"a768c72b-df6d-463e-b085-996d7b910985\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797582 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797635 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797669 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797714 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797744 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw7kx\" (UniqueName: \"kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx\") pod \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797768 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg6j8\" (UniqueName: \"kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797794 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb\") pod \"82827ec9-ac05-41ab-988c-99083ccdb949\" (UID: \"82827ec9-ac05-41ab-988c-99083ccdb949\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.797815 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn\") pod 
\"e467a49f-fdc1-4a9e-9907-4425f5ec6177\" (UID: \"e467a49f-fdc1-4a9e-9907-4425f5ec6177\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.798970 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.799007 4842 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.798999 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts" (OuterVolumeSpecName: "scripts") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.799023 4842 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a768c72b-df6d-463e-b085-996d7b910985-ovs-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.799072 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.799100 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run" (OuterVolumeSpecName: "var-run") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.800422 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config" (OuterVolumeSpecName: "config") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.817858 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.837409 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj" (OuterVolumeSpecName: "kube-api-access-h79wj") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "kube-api-access-h79wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.841195 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx" (OuterVolumeSpecName: "kube-api-access-hw7kx") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). 
InnerVolumeSpecName "kube-api-access-hw7kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.842042 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8" (OuterVolumeSpecName: "kube-api-access-vg6j8") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "kube-api-access-vg6j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.875688 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.880396 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.900149 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.901255 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902116 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902189 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902305 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902426 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: 
\"a31583c1-5fde-4763-a889-7257255fa217\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902499 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzd26\" (UniqueName: \"kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.902628 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle\") pod \"a31583c1-5fde-4763-a889-7257255fa217\" (UID: \"a31583c1-5fde-4763-a889-7257255fa217\") " Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903132 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h79wj\" (UniqueName: \"kubernetes.io/projected/a768c72b-df6d-463e-b085-996d7b910985-kube-api-access-h79wj\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903193 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e467a49f-fdc1-4a9e-9907-4425f5ec6177-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903403 4842 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903454 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw7kx\" (UniqueName: \"kubernetes.io/projected/e467a49f-fdc1-4a9e-9907-4425f5ec6177-kube-api-access-hw7kx\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903501 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg6j8\" (UniqueName: \"kubernetes.io/projected/82827ec9-ac05-41ab-988c-99083ccdb949-kube-api-access-vg6j8\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903546 4842 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e467a49f-fdc1-4a9e-9907-4425f5ec6177-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903654 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903716 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a768c72b-df6d-463e-b085-996d7b910985-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.903763 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.901188 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). 
InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.901959 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts" (OuterVolumeSpecName: "scripts") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.903851 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:17 crc kubenswrapper[4842]: E0202 07:09:17.904029 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data podName:441d47f7-e5dd-456f-b6fa-10a642be6742 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:19.904013559 +0000 UTC m=+1385.281281471 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data") pod "rabbitmq-cell1-server-0" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742") : configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.904856 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config" (OuterVolumeSpecName: "config") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.911307 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.930081 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26" (OuterVolumeSpecName: "kube-api-access-pzd26") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "kube-api-access-pzd26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.938561 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.947826 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.977732 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.978183 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "e467a49f-fdc1-4a9e-9907-4425f5ec6177" (UID: "e467a49f-fdc1-4a9e-9907-4425f5ec6177"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.986986 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:17 crc kubenswrapper[4842]: I0202 07:09:17.992075 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config" (OuterVolumeSpecName: "config") pod "82827ec9-ac05-41ab-988c-99083ccdb949" (UID: "82827ec9-ac05-41ab-988c-99083ccdb949"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.000373 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d is running failed: container process not found" containerID="6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005481 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005508 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005520 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005530 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005539 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzd26\" (UniqueName: 
\"kubernetes.io/projected/a31583c1-5fde-4763-a889-7257255fa217-kube-api-access-pzd26\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005548 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005557 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a31583c1-5fde-4763-a889-7257255fa217-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005565 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a31583c1-5fde-4763-a889-7257255fa217-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005573 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e467a49f-fdc1-4a9e-9907-4425f5ec6177-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005581 4842 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.005590 4842 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/82827ec9-ac05-41ab-988c-99083ccdb949-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.006026 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d is running failed: container process not found" containerID="6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.006194 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.007298 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d is running failed: container process not found" containerID="6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.007379 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="galera" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.026511 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.032480 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.038506 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a768c72b-df6d-463e-b085-996d7b910985" (UID: "a768c72b-df6d-463e-b085-996d7b910985"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.097073 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a31583c1-5fde-4763-a889-7257255fa217" (UID: "a31583c1-5fde-4763-a889-7257255fa217"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.148740 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.148858 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.148913 4842 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.148962 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a31583c1-5fde-4763-a889-7257255fa217-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.149008 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a768c72b-df6d-463e-b085-996d7b910985-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.162874 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.173367 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.179387 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:18 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: if [ -n "nova_cell0" ]; then Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="nova_cell0" Feb 02 07:09:18 crc kubenswrapper[4842]: else Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:18 crc kubenswrapper[4842]: fi Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:18 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:18 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:18 crc kubenswrapper[4842]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:18 crc kubenswrapper[4842]: # support updates Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.181288 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" podUID="5130c998-8bfd-413c-887e-2100da96f6ce" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.441342 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.469417 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a31583c1-5fde-4763-a889-7257255fa217/ovsdbserver-sb/0.log" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.469489 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a31583c1-5fde-4763-a889-7257255fa217","Type":"ContainerDied","Data":"1455920f56b035102336b6030ca95115000c538e6e505a3b940faf00be0a7147"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.469527 4842 scope.go:117] "RemoveContainer" containerID="c2eb9657c42f955c0263cd3a4cee2ba4741ed6bed3e4fa84ae9f59564a660266" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.469639 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.477328 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" event={"ID":"5130c998-8bfd-413c-887e-2100da96f6ce","Type":"ContainerStarted","Data":"edae9a46c8962c16de1f47c9594d864df221b1f93bbc0bdc1a42fba426cadc08"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.519550 4842 scope.go:117] "RemoveContainer" containerID="6cd00133afde786f3f39678d68f6c38b74703143640c9ef32412c8efe7f5aec9" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.519670 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.535165 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.557484 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559315 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559461 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="419e27de3686d1a75400d18f391cbe54519868631357cce324a86c057a1dbbfe" exitCode=0 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559484 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="5fe6ac9847ee5629c3a3a2ccb929b05946534e86d95fae65cd97cbab654c7391" exitCode=0 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559493 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="496f7c8f3a8e1190f069f9d123dad4f03c5ddc2c339a3a530d938ce75113f766" exitCode=0 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559557 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"419e27de3686d1a75400d18f391cbe54519868631357cce324a86c057a1dbbfe"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559582 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"5fe6ac9847ee5629c3a3a2ccb929b05946534e86d95fae65cd97cbab654c7391"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.559593 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"496f7c8f3a8e1190f069f9d123dad4f03c5ddc2c339a3a530d938ce75113f766"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.568478 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.166:8776/healthcheck\": read tcp 10.217.0.2:36538->10.217.0.166:8776: read: connection reset by peer" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.571755 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret\") pod \"590d1088-e964-43a6-b879-01c8b83d4147\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.572065 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config\") pod \"590d1088-e964-43a6-b879-01c8b83d4147\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.572288 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle\") pod \"590d1088-e964-43a6-b879-01c8b83d4147\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.572530 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz5x6\" (UniqueName: 
\"kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6\") pod \"590d1088-e964-43a6-b879-01c8b83d4147\" (UID: \"590d1088-e964-43a6-b879-01c8b83d4147\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.571763 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.578075 4842 generic.go:334] "Generic (PLEG): container finished" podID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerID="2a1ff124f28b987212a2f7ed64a1bf208d310f3e9f13e80b4572c2dce5f8a5f9" exitCode=143 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.578360 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerDied","Data":"2a1ff124f28b987212a2f7ed64a1bf208d310f3e9f13e80b4572c2dce5f8a5f9"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.595130 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.595966 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4glck_a768c72b-df6d-463e-b085-996d7b910985/openstack-network-exporter/0.log" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.596827 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4glck" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.597674 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "590d1088-e964-43a6-b879-01c8b83d4147" (UID: "590d1088-e964-43a6-b879-01c8b83d4147"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.597725 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4glck" event={"ID":"a768c72b-df6d-463e-b085-996d7b910985","Type":"ContainerDied","Data":"3895bf2e90ce68029a65e13b1b0d09c0d18f1338f9ff1f7787b7a618bced51a5"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.603437 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.604126 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6" (OuterVolumeSpecName: "kube-api-access-wz5x6") pod "590d1088-e964-43a6-b879-01c8b83d4147" (UID: "590d1088-e964-43a6-b879-01c8b83d4147"). InnerVolumeSpecName "kube-api-access-wz5x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.612224 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.623628 4842 scope.go:117] "RemoveContainer" containerID="a62e03cec1bb8e57732f90cf545c9f9612917cecf937c100e89f185e517fa7dd" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.630885 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerStarted","Data":"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.630942 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerStarted","Data":"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.631083 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener-log" containerID="cri-o://c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.631692 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener" containerID="cri-o://b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.643815 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sgwrm" event={"ID":"e467a49f-fdc1-4a9e-9907-4425f5ec6177","Type":"ContainerDied","Data":"e22d47c5687c2823a538f3e86888cac139c920a3eeed02648ed069882ffa70ad"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.643918 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sgwrm" Feb 02 07:09:18 crc kubenswrapper[4842]: W0202 07:09:18.650304 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode91519e6_bf55_4c08_8274_1d8a59f1ff52.slice/crio-16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e WatchSource:0}: Error finding container 16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e: Status 404 returned error can't find the container with id 16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.652096 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "590d1088-e964-43a6-b879-01c8b83d4147" (UID: "590d1088-e964-43a6-b879-01c8b83d4147"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.652358 4842 generic.go:334] "Generic (PLEG): container finished" podID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" containerID="19ce3a33fe25413f4f312112bb88f2cc8ceb19171589dbec9313d4c51f900ca1" exitCode=0 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.652490 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d","Type":"ContainerDied","Data":"19ce3a33fe25413f4f312112bb88f2cc8ceb19171589dbec9313d4c51f900ca1"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.652568 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.661936 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.662060 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ddd577785-8dp78" event={"ID":"82827ec9-ac05-41ab-988c-99083ccdb949","Type":"ContainerDied","Data":"3b795fd687296b78b29dffde7f9f5a14bcbd688f6a97aac6389de0b8b43b6094"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.662562 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "590d1088-e964-43a6-b879-01c8b83d4147" (UID: "590d1088-e964-43a6-b879-01c8b83d4147"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.671342 4842 generic.go:334] "Generic (PLEG): container finished" podID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerID="6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" exitCode=0 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.671400 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"bed4dadb-b854-4082-b18a-67f58543bb9a","Type":"ContainerDied","Data":"6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.671491 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.674761 4842 generic.go:334] "Generic (PLEG): container finished" podID="b912e45d-72e7-4250-9757-add1efcfb054" containerID="9926781ae9dc15022af00f978a6d8014ea831a07a27df31142281c3ba8914507" exitCode=1 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.674825 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kl9p2" event={"ID":"b912e45d-72e7-4250-9757-add1efcfb054","Type":"ContainerDied","Data":"9926781ae9dc15022af00f978a6d8014ea831a07a27df31142281c3ba8914507"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.675048 4842 scope.go:117] "RemoveContainer" containerID="9926781ae9dc15022af00f978a6d8014ea831a07a27df31142281c3ba8914507" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.680868 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" podStartSLOduration=5.680852285 podStartE2EDuration="5.680852285s" podCreationTimestamp="2026-02-02 07:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:09:18.65756958 +0000 UTC m=+1384.034837492" watchObservedRunningTime="2026-02-02 07:09:18.680852285 +0000 UTC m=+1384.058120197" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.683137 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687788 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm2d8\" (UniqueName: \"kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8\") pod \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687826 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data\") pod \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687869 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687888 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs\") pod \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687955 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs\") pod \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.687989 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688060 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts\") pod \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688151 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688304 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle\") pod \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\" (UID: \"3a6e38b7-4a6d-4d93-af3d-5abac4efc44d\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688335 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688363 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b6r6\" (UniqueName: \"kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688383 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688441 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688471 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config\") pod \"bed4dadb-b854-4082-b18a-67f58543bb9a\" (UID: \"bed4dadb-b854-4082-b18a-67f58543bb9a\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688517 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljflm\" (UniqueName: \"kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm\") pod \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\" (UID: \"88d00cbf-6e28-4be5-abc2-6c77e76de81e\") " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688793 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688897 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688908 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz5x6\" (UniqueName: \"kubernetes.io/projected/590d1088-e964-43a6-b879-01c8b83d4147-kube-api-access-wz5x6\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688919 4842 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.688927 4842 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/590d1088-e964-43a6-b879-01c8b83d4147-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.688999 4842 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.689055 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:22.689025684 +0000 UTC m=+1388.066293596 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : secret "barbican-config-data" not found Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.700755 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.701328 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88d00cbf-6e28-4be5-abc2-6c77e76de81e" (UID: "88d00cbf-6e28-4be5-abc2-6c77e76de81e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.702484 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.702630 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.704301 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-4glck"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.704338 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-4glck"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.704580 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.707395 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8" (OuterVolumeSpecName: "kube-api-access-nm2d8") pod "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" (UID: "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d"). InnerVolumeSpecName "kube-api-access-nm2d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.709175 4842 generic.go:334] "Generic (PLEG): container finished" podID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerID="49dfdfa99a47811582b530171bcdb672444bf58776e14b517fe66bf3f7abc750" exitCode=0 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.709268 4842 generic.go:334] "Generic (PLEG): container finished" podID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerID="1e413e67564e718a498ac35eeced53092dbd9372163eaf63c69cfa47632f99ec" exitCode=0 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.709352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerDied","Data":"49dfdfa99a47811582b530171bcdb672444bf58776e14b517fe66bf3f7abc750"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.709426 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerDied","Data":"1e413e67564e718a498ac35eeced53092dbd9372163eaf63c69cfa47632f99ec"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.719959 4842 generic.go:334] "Generic (PLEG): container finished" podID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerID="d4afe8e323946b2a091c267fa1099076188f1ad9d2a9b63f7930456fb99f3d8f" exitCode=143 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.720019 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerDied","Data":"d4afe8e323946b2a091c267fa1099076188f1ad9d2a9b63f7930456fb99f3d8f"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.725538 4842 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6" (OuterVolumeSpecName: "kube-api-access-8b6r6") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "kube-api-access-8b6r6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.728506 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.736968 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 02 07:09:18 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: if [ -n "cinder" ]; then
Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="cinder"
Feb 02 07:09:18 crc kubenswrapper[4842]: else
Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="*"
Feb 02 07:09:18 crc kubenswrapper[4842]: fi
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: # going for maximum compatibility here:
Feb 02 07:09:18 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 02 07:09:18 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 02 07:09:18 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Feb 02 07:09:18 crc kubenswrapper[4842]: # support updates
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.737151 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.737408 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-central-agent" containerID="cri-o://454fd5e306d51498a984d5077e2446e7c6cf9f4c21170f227c52179104c4a621" gracePeriod=30
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.737508 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="sg-core" containerID="cri-o://4bae417047baf6bf846e8de15338ba7207499db97e8d990c0e70145588c621ef" gracePeriod=30
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.737539 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-notification-agent" containerID="cri-o://b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1" gracePeriod=30
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.737533 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="proxy-httpd" containerID="cri-o://bad70e2dba666c009e7972d01ff11c1b18b18e47b07343dcd24db229c935fcc3" gracePeriod=30
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.743382 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-716d-account-create-update-x4f2v" podUID="e91519e6-bf55-4c08-8274-1d8a59f1ff52"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.747407 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.759499 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 02 07:09:18 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306"
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: if [ -n "glance" ]; then
Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="glance"
Feb 02 07:09:18 crc kubenswrapper[4842]: else
Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="*"
Feb 02 07:09:18 crc kubenswrapper[4842]: fi
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: # going for maximum compatibility here:
Feb 02 07:09:18 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Feb 02 07:09:18 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Feb 02 07:09:18 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Feb 02 07:09:18 crc kubenswrapper[4842]: # support updates
Feb 02 07:09:18 crc kubenswrapper[4842]:
Feb 02 07:09:18 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError"
Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.765016 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-2348-account-create-update-j8g5r" podUID="81e3e639-93f4-48d1-8a2f-89e48bcc5f1d"
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.774639 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.774849 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" containerName="kube-state-metrics" containerID="cri-o://75aec13501e8ac4a78490209fc3281c84b435ac2ebcc48667746bb6eb38e36e9" gracePeriod=30
Feb 02 07:09:18 crc kubenswrapper[4842]: W0202 07:09:18.781481 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90821e80_1367_4cf6_8087_fb83507223ec.slice/crio-6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078 WatchSource:0}: Error finding container 6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078: Status 404 returned error can't find the container with id 6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078
Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.785179 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm" (OuterVolumeSpecName: "kube-api-access-ljflm") pod "88d00cbf-6e28-4be5-abc2-6c77e76de81e" (UID: "88d00cbf-6e28-4be5-abc2-6c77e76de81e"). InnerVolumeSpecName "kube-api-access-ljflm".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797046 4842 scope.go:117] "RemoveContainer" containerID="42408d707e9e2078b40d0e9f4ce34644fc07f209b2994b218bbf5f92d1f39ea7" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797593 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") pod \"barbican-api-654fdfd6b6-nrxvh\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797710 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b6r6\" (UniqueName: \"kubernetes.io/projected/bed4dadb-b854-4082-b18a-67f58543bb9a-kube-api-access-8b6r6\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797722 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797733 4842 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797744 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljflm\" (UniqueName: \"kubernetes.io/projected/88d00cbf-6e28-4be5-abc2-6c77e76de81e-kube-api-access-ljflm\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797755 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm2d8\" (UniqueName: \"kubernetes.io/projected/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-kube-api-access-nm2d8\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797788 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/bed4dadb-b854-4082-b18a-67f58543bb9a-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797800 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d00cbf-6e28-4be5-abc2-6c77e76de81e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.797811 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bed4dadb-b854-4082-b18a-67f58543bb9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.798385 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.798423 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data podName:2b2ca532-dbbc-4148-8d2f-fc474685f0bd nodeName:}" failed. No retries permitted until 2026-02-02 07:09:22.79840822 +0000 UTC m=+1388.175676122 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data") pod "rabbitmq-server-0" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd") : configmap "rabbitmq-config-data" not found Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.815827 4842 projected.go:194] Error preparing data for projected volume kube-api-access-h5vs6 for pod openstack/barbican-api-654fdfd6b6-nrxvh: failed to fetch token: serviceaccounts "barbican-barbican" not found Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.815885 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6 podName:72b63114-a275-4e32-9ad4-9f59e22151b3 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:22.815868447 +0000 UTC m=+1388.193136359 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-h5vs6" (UniqueName: "kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6") pod "barbican-api-654fdfd6b6-nrxvh" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3") : failed to fetch token: serviceaccounts "barbican-barbican" not found Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.816210 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.816397 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.817480 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.828374 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.828481 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.842407 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.842613 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.842636 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.842741 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.844822 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:18 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: if [ -n "neutron" ]; then Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="neutron" Feb 02 07:09:18 crc kubenswrapper[4842]: else Feb 02 07:09:18 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:18 crc 
kubenswrapper[4842]: fi Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:18 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:18 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:18 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:18 crc kubenswrapper[4842]: # support updates Feb 02 07:09:18 crc kubenswrapper[4842]: Feb 02 07:09:18 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.846894 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-bfdd-account-create-update-z7blt" podUID="90821e80-1367-4cf6-8087-fb83507223ec" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.849386 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerStarted","Data":"dac9b206e4e1335054c8c15fe13fa2bcf140fe9dec688f671a0584f1e29286b6"} Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.849525 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker-log" containerID="cri-o://04882b818d128bc118fdd65d9db4d076517b460bcb504e4f555e0244313167cc" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.849585 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker" containerID="cri-o://dac9b206e4e1335054c8c15fe13fa2bcf140fe9dec688f671a0584f1e29286b6" gracePeriod=30 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.862406 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" (UID: "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.870069 4842 generic.go:334] "Generic (PLEG): container finished" podID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerID="5a24327ba4517226f20e20f0a45585d27dd9a1490c6050d591f1638384be7d6d" exitCode=143 Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.870165 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.870742 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerDied","Data":"5a24327ba4517226f20e20f0a45585d27dd9a1490c6050d591f1638384be7d6d"} Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.878616 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:18 crc kubenswrapper[4842]: E0202 07:09:18.878682 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.880232 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" (UID: "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.901464 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.901507 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.901517 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.901526 4842 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.921267 4842 scope.go:117] "RemoveContainer" containerID="19ce3a33fe25413f4f312112bb88f2cc8ceb19171589dbec9313d4c51f900ca1" Feb 02 07:09:18 crc kubenswrapper[4842]: I0202 07:09:18.924452 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sgwrm"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.007023 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sgwrm"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.013548 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data" (OuterVolumeSpecName: "config-data") pod "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" (UID: "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.018292 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.037386 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.040109 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "bed4dadb-b854-4082-b18a-67f58543bb9a" (UID: "bed4dadb-b854-4082-b18a-67f58543bb9a"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.046865 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.059623 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" (UID: "3a6e38b7-4a6d-4d93-af3d-5abac4efc44d"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.077279 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ddd577785-8dp78"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.111333 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.111626 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="2e4d672b-cb7a-406d-ab62-12745f300ef0" containerName="memcached" containerID="cri-o://95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4" gracePeriod=30 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.121257 4842 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/bed4dadb-b854-4082-b18a-67f58543bb9a-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.121282 4842 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.121294 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.177868 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0ec7-account-create-update-x5rkz"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.186625 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0ec7-account-create-update-x5rkz"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.222999 4842 kubelet.go:2421] "SyncLoop 
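Volume teardown in these entries runs in two phases: a per-pod `UnmountVolume.TearDown`, then a node-level `UnmountDevice` once no pod on the node still references the device (here `local-storage11-crc`); the "Volume detached" lines that follow are the reconciler updating its actual state of the world. To see which PV backs that local volume (a sketch, assuming the PV carries the same name as the volume in the log):

    # Local static PVs on CRC are conventionally named after the storage
    # directory; confirm the claim and phase before reusing the volume.
    kubectl get pv local-storage11-crc -o wide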
ADD" source="api" pods=["openstack/keystone-0ec7-account-create-update-9srfz"] Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223359 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="cinder-scheduler" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223370 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="cinder-scheduler" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223390 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223396 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223409 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="galera" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223415 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="galera" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223427 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223433 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223440 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="mysql-bootstrap" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223445 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="mysql-bootstrap" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223453 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="ovsdbserver-nb" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223459 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="ovsdbserver-nb" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223490 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" containerName="init" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223496 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" containerName="init" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223503 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" containerName="dnsmasq-dns" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223509 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" containerName="dnsmasq-dns" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223518 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a768c72b-df6d-463e-b085-996d7b910985" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223526 4842 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a768c72b-df6d-463e-b085-996d7b910985" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223539 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223546 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223553 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="probe" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223558 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="probe" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223568 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223573 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.223582 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="ovsdbserver-sb" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223588 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="ovsdbserver-sb" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223745 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a768c72b-df6d-463e-b085-996d7b910985" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223756 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="probe" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223766 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223778 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="ovsdbserver-sb" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223788 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31583c1-5fde-4763-a889-7257255fa217" containerName="openstack-network-exporter" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223794 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" containerName="ovn-controller" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223804 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" containerName="ovsdbserver-nb" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223814 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" containerName="galera" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223827 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" 
containerName="dnsmasq-dns" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223837 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" containerName="nova-cell1-novncproxy-novncproxy" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.223842 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" containerName="cinder-scheduler" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.224411 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.225056 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" podStartSLOduration=6.225047335 podStartE2EDuration="6.225047335s" podCreationTimestamp="2026-02-02 07:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-02 07:09:18.931166864 +0000 UTC m=+1384.308434776" watchObservedRunningTime="2026-02-02 07:09:19.225047335 +0000 UTC m=+1384.602315247" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.228431 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.245930 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-z87kx"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.258118 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-z87kx"] Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.259406 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:19 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: if [ -n "barbican" ]; then Feb 02 07:09:19 crc kubenswrapper[4842]: GRANT_DATABASE="barbican" Feb 02 07:09:19 crc kubenswrapper[4842]: else Feb 02 07:09:19 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:19 crc kubenswrapper[4842]: fi Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:19 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:19 crc kubenswrapper[4842]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:19 crc kubenswrapper[4842]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:19 crc kubenswrapper[4842]: # support updates Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.262344 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-8e42-account-create-update-pssf7" podUID="92090cd2-6d30-4aec-81a2-f7d41c40b52d" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.273106 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xh7mg"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.281896 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xh7mg"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.293605 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.295444 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.303859 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0ec7-account-create-update-9srfz"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.320497 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.325242 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.326458 4842 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 02 07:09:19 crc kubenswrapper[4842]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[/bin/sh -c #!/bin/bash Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: MYSQL_CMD="mysql -h -u root -P 3306" Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: if [ -n "placement" ]; then Feb 02 07:09:19 crc kubenswrapper[4842]: GRANT_DATABASE="placement" Feb 02 07:09:19 crc kubenswrapper[4842]: else Feb 02 07:09:19 crc kubenswrapper[4842]: GRANT_DATABASE="*" Feb 02 07:09:19 crc kubenswrapper[4842]: fi Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: # going for maximum compatibility here: Feb 02 07:09:19 crc kubenswrapper[4842]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Feb 02 07:09:19 crc kubenswrapper[4842]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Feb 02 07:09:19 crc kubenswrapper[4842]: # 3. create user with CREATE but then do all password and TLS with ALTER to Feb 02 07:09:19 crc kubenswrapper[4842]: # support updates Feb 02 07:09:19 crc kubenswrapper[4842]: Feb 02 07:09:19 crc kubenswrapper[4842]: $MYSQL_CMD < logger="UnhandledError" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.329029 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-85ce-account-create-update-szhp5" podUID="79d5e0a1-8df4-4db1-aaf8-0d253163a522" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.352758 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw8v8\" (UniqueName: \"kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.356742 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.371918 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.372150 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-cd7d86b6c-rcdjq" podUID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" containerName="keystone-api" containerID="cri-o://4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765" gracePeriod=30 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.402650 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.406269 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.435993 4842 scope.go:117] "RemoveContainer" containerID="b1f4bec090a15a8f33492373710dad94faf1e40a938d6cc9e964fd93f07eecf3" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453195 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom\") pod \"72b63114-a275-4e32-9ad4-9f59e22151b3\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453304 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs\") pod \"72b63114-a275-4e32-9ad4-9f59e22151b3\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453348 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs\") pod \"72b63114-a275-4e32-9ad4-9f59e22151b3\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453764 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs" (OuterVolumeSpecName: "logs") pod "72b63114-a275-4e32-9ad4-9f59e22151b3" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453798 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs\") pod \"72b63114-a275-4e32-9ad4-9f59e22151b3\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.453825 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle\") pod \"72b63114-a275-4e32-9ad4-9f59e22151b3\" (UID: \"72b63114-a275-4e32-9ad4-9f59e22151b3\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454058 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454553 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f45s8\" (UniqueName: \"kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454625 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454647 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454709 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw8v8\" (UniqueName: \"kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.454855 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72b63114-a275-4e32-9ad4-9f59e22151b3-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.454955 4842 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.454999 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. 
No retries permitted until 2026-02-02 07:09:19.954982351 +0000 UTC m=+1385.332250263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : configmap "openstack-scripts" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.458567 4842 projected.go:194] Error preparing data for projected volume kube-api-access-jw8v8 for pod openstack/keystone-0ec7-account-create-update-9srfz: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.458629 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8 podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:19.958612334 +0000 UTC m=+1385.335880256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jw8v8" (UniqueName: "kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.459660 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115a51a9-6125-46e1-a960-a66cb9957d38" path="/var/lib/kubelet/pods/115a51a9-6125-46e1-a960-a66cb9957d38/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.463667 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "72b63114-a275-4e32-9ad4-9f59e22151b3" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.464513 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72b63114-a275-4e32-9ad4-9f59e22151b3" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.465482 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "72b63114-a275-4e32-9ad4-9f59e22151b3" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.468202 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "72b63114-a275-4e32-9ad4-9f59e22151b3" (UID: "72b63114-a275-4e32-9ad4-9f59e22151b3"). InnerVolumeSpecName "internal-tls-certs". 
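Both mount failures above are ordering races rather than permanent errors: the `openstack-scripts` ConfigMap and the `galera-openstack` ServiceAccount are expected to appear shortly after the pod, and `nestedpendingoperations` schedules a retry with backoff (starting at the `durationBeforeRetry 500ms` shown, growing on repeated failures). Verifying the two dependencies by hand (a sketch):

    # The pod stays in ContainerCreating until both objects exist in the
    # namespace; once they do, the next retry mounts the volumes.
    kubectl -n openstack get configmap openstack-scripts
    kubectl -n openstack get serviceaccount galera-openstack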
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.478814 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="226a55ec-a7c1-4c34-953c-bb4e549b0fc5" path="/var/lib/kubelet/pods/226a55ec-a7c1-4c34-953c-bb4e549b0fc5/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.479555 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b89146d-a545-4525-8744-723e0d9248b5" path="/var/lib/kubelet/pods/3b89146d-a545-4525-8744-723e0d9248b5/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.480059 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="590d1088-e964-43a6-b879-01c8b83d4147" path="/var/lib/kubelet/pods/590d1088-e964-43a6-b879-01c8b83d4147/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.486528 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6601a68f-34a5-4629-ac74-97cb14e809f3" path="/var/lib/kubelet/pods/6601a68f-34a5-4629-ac74-97cb14e809f3/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.487076 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82827ec9-ac05-41ab-988c-99083ccdb949" path="/var/lib/kubelet/pods/82827ec9-ac05-41ab-988c-99083ccdb949/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.503380 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.503607 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31583c1-5fde-4763-a889-7257255fa217" path="/var/lib/kubelet/pods/a31583c1-5fde-4763-a889-7257255fa217/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.508498 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a768c72b-df6d-463e-b085-996d7b910985" path="/var/lib/kubelet/pods/a768c72b-df6d-463e-b085-996d7b910985/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.513984 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff6dd37-52b7-41b4-bc15-4f6436cdabc7" path="/var/lib/kubelet/pods/bff6dd37-52b7-41b4-bc15-4f6436cdabc7/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.516958 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e467a49f-fdc1-4a9e-9907-4425f5ec6177" path="/var/lib/kubelet/pods/e467a49f-fdc1-4a9e-9907-4425f5ec6177/volumes" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.526153 4842 scope.go:117] "RemoveContainer" containerID="8bb94b1491e283b01c189ac6006d3fc23945dfbdff62fb805e090497b073e7c4" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.536936 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.547885 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0ec7-account-create-update-9srfz"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.555738 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.555809 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.555878 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts\") pod \"5130c998-8bfd-413c-887e-2100da96f6ce\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.555957 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.555984 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556015 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556039 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556057 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2cq2\" (UniqueName: \"kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2\") pod \"5130c998-8bfd-413c-887e-2100da96f6ce\" (UID: \"5130c998-8bfd-413c-887e-2100da96f6ce\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556093 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556133 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-pqwsc\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc\") pod \"9eff2351-b4e8-43cf-a232-9c36cb11c130\" (UID: \"9eff2351-b4e8-43cf-a232-9c36cb11c130\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556438 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556451 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556508 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f45s8\" (UniqueName: \"kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556562 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556681 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556693 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556704 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556713 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.556721 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.557186 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " 
pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.557913 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5130c998-8bfd-413c-887e-2100da96f6ce" (UID: "5130c998-8bfd-413c-887e-2100da96f6ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.558255 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.558896 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.561400 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.569377 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc" (OuterVolumeSpecName: "kube-api-access-pqwsc") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "kube-api-access-pqwsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.580324 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2" (OuterVolumeSpecName: "kube-api-access-r2cq2") pod "5130c998-8bfd-413c-887e-2100da96f6ce" (UID: "5130c998-8bfd-413c-887e-2100da96f6ce"). InnerVolumeSpecName "kube-api-access-r2cq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.589251 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f45s8\" (UniqueName: \"kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8\") pod \"redhat-operators-zllm7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") " pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.602041 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.201:3000/\": dial tcp 10.217.0.201:3000: connect: connection refused" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.606235 4842 scope.go:117] "RemoveContainer" containerID="6befc904ad1bc362edb2452ad98dace7a8d19908d934b410bdb62de4fb72339d" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.626887 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6ctcq"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.630596 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.631082 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jw8v8 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-0ec7-account-create-update-9srfz" podUID="db5059ce-9214-449d-a8d5-1b6ab7447e65" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.664640 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skr4t\" (UniqueName: \"kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t\") pod \"8dad4bc1-b1ae-436c-925e-986d33b77e51\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.664696 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts\") pod \"8dad4bc1-b1ae-436c-925e-986d33b77e51\" (UID: \"8dad4bc1-b1ae-436c-925e-986d33b77e51\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.665178 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5130c998-8bfd-413c-887e-2100da96f6ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.665194 4842 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.665205 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2cq2\" (UniqueName: \"kubernetes.io/projected/5130c998-8bfd-413c-887e-2100da96f6ce-kube-api-access-r2cq2\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.665227 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqwsc\" (UniqueName: \"kubernetes.io/projected/9eff2351-b4e8-43cf-a232-9c36cb11c130-kube-api-access-pqwsc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc 
kubenswrapper[4842]: I0202 07:09:19.665236 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eff2351-b4e8-43cf-a232-9c36cb11c130-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.665654 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8dad4bc1-b1ae-436c-925e-986d33b77e51" (UID: "8dad4bc1-b1ae-436c-925e-986d33b77e51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.676596 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6ctcq"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.685201 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t" (OuterVolumeSpecName: "kube-api-access-skr4t") pod "8dad4bc1-b1ae-436c-925e-986d33b77e51" (UID: "8dad4bc1-b1ae-436c-925e-986d33b77e51"). InnerVolumeSpecName "kube-api-access-skr4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.686029 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data" (OuterVolumeSpecName: "config-data") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.700325 4842 scope.go:117] "RemoveContainer" containerID="29807641fcc1ca11bd99ef7a60eab40eeea4379d7aa3a9b641c81ec27d1ba950" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.713924 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.717292 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.732967 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.733057 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"] Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.746090 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.748385 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.748417 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="ovn-northd" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.749039 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zllm7" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.751890 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="galera" containerID="cri-o://c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c" gracePeriod=30 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.754784 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.755670 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.766922 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.766975 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.766994 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767045 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767070 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767360 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dad4bc1-b1ae-436c-925e-986d33b77e51-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767379 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767388 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767397 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767406 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skr4t\" (UniqueName: \"kubernetes.io/projected/8dad4bc1-b1ae-436c-925e-986d33b77e51-kube-api-access-skr4t\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.767942 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.768360 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs" (OuterVolumeSpecName: "logs") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.782245 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-85ce-account-create-update-szhp5"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.783544 4842 scope.go:117] "RemoveContainer" containerID="7321f950b4c167a7b34d5c400d350da10c11bc84a859361985534a57f9758316" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.783955 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts" (OuterVolumeSpecName: "scripts") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.809447 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.815483 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.832188 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.842038 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.845947 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.864342 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eff2351-b4e8-43cf-a232-9c36cb11c130" (UID: "9eff2351-b4e8-43cf-a232-9c36cb11c130"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.870969 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871030 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871055 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fmp4\" (UniqueName: \"kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871126 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs\") pod \"900b2d20-01c8-47e0-8271-ccfd8549d468\" (UID: \"900b2d20-01c8-47e0-8271-ccfd8549d468\") " Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871813 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eff2351-b4e8-43cf-a232-9c36cb11c130-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871827 4842 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/900b2d20-01c8-47e0-8271-ccfd8549d468-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871837 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/900b2d20-01c8-47e0-8271-ccfd8549d468-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871846 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.871855 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.895928 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.900989 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ce-account-create-update-szhp5" event={"ID":"79d5e0a1-8df4-4db1-aaf8-0d253163a522","Type":"ContainerStarted","Data":"92c5616de7100c6457ed5b0dcd602dadf7228bf9da3a33c8035d364e9130e12d"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.911857 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4" (OuterVolumeSpecName: "kube-api-access-4fmp4") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "kube-api-access-4fmp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.918598 4842 generic.go:334] "Generic (PLEG): container finished" podID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerID="04882b818d128bc118fdd65d9db4d076517b460bcb504e4f555e0244313167cc" exitCode=143 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.918646 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerDied","Data":"04882b818d128bc118fdd65d9db4d076517b460bcb504e4f555e0244313167cc"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.925793 4842 generic.go:334] "Generic (PLEG): container finished" podID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerID="c6b2aef7c5907fec1f821bb206e985dfa1c10ebd9ed998f2f05ec13c6cf132ab" exitCode=0 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.925843 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerDied","Data":"c6b2aef7c5907fec1f821bb206e985dfa1c10ebd9ed998f2f05ec13c6cf132ab"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.931771 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e42-account-create-update-pssf7" event={"ID":"92090cd2-6d30-4aec-81a2-f7d41c40b52d","Type":"ContainerStarted","Data":"841933402afec6053b59c1d117b644948866331ecb15d4942a1241af82efdbd6"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.939536 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.941028 4842 generic.go:334] "Generic (PLEG): container finished" podID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerID="35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab" exitCode=0 Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.941087 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerDied","Data":"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.941113 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"900b2d20-01c8-47e0-8271-ccfd8549d468","Type":"ContainerDied","Data":"f8428d2a8e93132509de41794f4b8946214003b09ad9c320fa782cef8d54fe76"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.941130 4842 scope.go:117] "RemoveContainer" containerID="35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.941279 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.959985 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-659598d599-lpzh5" event={"ID":"9eff2351-b4e8-43cf-a232-9c36cb11c130","Type":"ContainerDied","Data":"c97160040d0350fa9bd5e1bbc3b5084d4e4f379ea92abc97f8017a5311a0c9cf"} Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.960066 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-659598d599-lpzh5" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.972489 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.972768 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw8v8\" (UniqueName: \"kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.972906 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.972964 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: I0202 07:09:19.973015 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fmp4\" (UniqueName: \"kubernetes.io/projected/900b2d20-01c8-47e0-8271-ccfd8549d468-kube-api-access-4fmp4\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.972992 4842 configmap.go:193] Couldn't get configMap 
openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.973171 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:20.973155246 +0000 UTC m=+1386.350423158 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : configmap "openstack-scripts" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.973434 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.973502 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data podName:441d47f7-e5dd-456f-b6fa-10a642be6742 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:23.973485904 +0000 UTC m=+1389.350753816 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data") pod "rabbitmq-cell1-server-0" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742") : configmap "rabbitmq-cell1-config-data" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.975804 4842 projected.go:194] Error preparing data for projected volume kube-api-access-jw8v8 for pod openstack/keystone-0ec7-account-create-update-9srfz: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:19 crc kubenswrapper[4842]: E0202 07:09:19.997640 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8 podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:20.997610861 +0000 UTC m=+1386.374878773 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jw8v8" (UniqueName: "kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.003299 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data" (OuterVolumeSpecName: "config-data") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.051500 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "900b2d20-01c8-47e0-8271-ccfd8549d468" (UID: "900b2d20-01c8-47e0-8271-ccfd8549d468"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.067382 4842 generic.go:334] "Generic (PLEG): container finished" podID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" containerID="75aec13501e8ac4a78490209fc3281c84b435ac2ebcc48667746bb6eb38e36e9" exitCode=2 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.067611 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c","Type":"ContainerDied","Data":"75aec13501e8ac4a78490209fc3281c84b435ac2ebcc48667746bb6eb38e36e9"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.071076 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" event={"ID":"88d00cbf-6e28-4be5-abc2-6c77e76de81e","Type":"ContainerDied","Data":"595b44b024cc413350c4c52a2edd391699f6565dcef71575de95c9a8d45985fb"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.071095 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-17c9-account-create-update-6xs6n" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.075995 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.076323 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/900b2d20-01c8-47e0-8271-ccfd8549d468-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.085545 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-716d-account-create-update-x4f2v" event={"ID":"e91519e6-bf55-4c08-8274-1d8a59f1ff52","Type":"ContainerStarted","Data":"16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.145553 4842 generic.go:334] "Generic (PLEG): container finished" podID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerID="c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398" exitCode=143 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.145636 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerDied","Data":"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.174424 4842 generic.go:334] "Generic (PLEG): container finished" podID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerID="bebe8c74ad90a2dc028ad9e30942ced9f67c8af8df16026b5b89379d97e80e00" exitCode=0 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.174500 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerDied","Data":"bebe8c74ad90a2dc028ad9e30942ced9f67c8af8df16026b5b89379d97e80e00"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.204813 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kl9p2" event={"ID":"b912e45d-72e7-4250-9757-add1efcfb054","Type":"ContainerStarted","Data":"13000d6307279a8f1879b7fd7be84a407943a9cc3066fff0cf9a626a1678f240"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.205481 4842 kubelet_pods.go:1007] "Unable to 
retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-kl9p2" secret="" err="secret \"galera-openstack-dockercfg-xfhgf\" not found" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.205511 4842 scope.go:117] "RemoveContainer" containerID="13000d6307279a8f1879b7fd7be84a407943a9cc3066fff0cf9a626a1678f240" Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.205882 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-kl9p2_openstack(b912e45d-72e7-4250-9757-add1efcfb054)\"" pod="openstack/root-account-create-update-kl9p2" podUID="b912e45d-72e7-4250-9757-add1efcfb054" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.234532 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" event={"ID":"5130c998-8bfd-413c-887e-2100da96f6ce","Type":"ContainerDied","Data":"edae9a46c8962c16de1f47c9594d864df221b1f93bbc0bdc1a42fba426cadc08"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.234649 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7f00-account-create-update-wfvs9" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.259392 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-z7blt" event={"ID":"90821e80-1367-4cf6-8087-fb83507223ec","Type":"ContainerStarted","Data":"6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.304786 4842 scope.go:117] "RemoveContainer" containerID="bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.320861 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2348-account-create-update-j8g5r" event={"ID":"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d","Type":"ContainerStarted","Data":"f55c42fda20e7505f223b55e3afbf9284af6c4d7c17fcc411b0d5c1ee7acf9ca"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.335733 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.337658 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-89ff-account-create-update-fbkfk" event={"ID":"8dad4bc1-b1ae-436c-925e-986d33b77e51","Type":"ContainerDied","Data":"19b5b9e6138f019e100c7874a7e9ab2b0be50a7d46a7fd240461e516fb3462c0"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.337808 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-89ff-account-create-update-fbkfk" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.377355 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"] Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.388973 4842 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.389027 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts podName:b912e45d-72e7-4250-9757-add1efcfb054 nodeName:}" failed. 
No retries permitted until 2026-02-02 07:09:20.889011945 +0000 UTC m=+1386.266279857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts") pod "root-account-create-update-kl9p2" (UID: "b912e45d-72e7-4250-9757-add1efcfb054") : configmap "openstack-scripts" not found Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.397150 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.405945 4842 generic.go:334] "Generic (PLEG): container finished" podID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerID="bad70e2dba666c009e7972d01ff11c1b18b18e47b07343dcd24db229c935fcc3" exitCode=0 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.405990 4842 generic.go:334] "Generic (PLEG): container finished" podID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerID="4bae417047baf6bf846e8de15338ba7207499db97e8d990c0e70145588c621ef" exitCode=2 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406000 4842 generic.go:334] "Generic (PLEG): container finished" podID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerID="454fd5e306d51498a984d5077e2446e7c6cf9f4c21170f227c52179104c4a621" exitCode=0 Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406079 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406754 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7f00-account-create-update-wfvs9"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406792 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerDied","Data":"bad70e2dba666c009e7972d01ff11c1b18b18e47b07343dcd24db229c935fcc3"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406815 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerDied","Data":"4bae417047baf6bf846e8de15338ba7207499db97e8d990c0e70145588c621ef"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406826 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerDied","Data":"454fd5e306d51498a984d5077e2446e7c6cf9f4c21170f227c52179104c4a621"} Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.406893 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-654fdfd6b6-nrxvh" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.442820 4842 scope.go:117] "RemoveContainer" containerID="35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab" Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.444358 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab\": container with ID starting with 35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab not found: ID does not exist" containerID="35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.444388 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab"} err="failed to get container status \"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab\": rpc error: code = NotFound desc = could not find container \"35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab\": container with ID starting with 35494b429ef02861ccac7eb4515711429c34dfc143b4a511f2c7253734f037ab not found: ID does not exist" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.444408 4842 scope.go:117] "RemoveContainer" containerID="bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070" Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.446250 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070\": container with ID starting with bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070 not found: ID does not exist" containerID="bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.446277 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070"} err="failed to get container status \"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070\": rpc error: code = NotFound desc = could not find container \"bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070\": container with ID starting with bd926e0b40deedf62e76e58772126de2d573692a9f905d9665b40c94008fd070 not found: ID does not exist" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.446295 4842 scope.go:117] "RemoveContainer" containerID="49dfdfa99a47811582b530171bcdb672444bf58776e14b517fe66bf3f7abc750" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.475500 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.491704 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz5c2\" (UniqueName: \"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2\") pod \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.491873 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x268\" (UniqueName: \"kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268\") pod \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.491994 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle\") pod \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.492587 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data\") pod \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.492766 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs\") pod \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.492832 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") pod \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.492909 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle\") pod \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.493023 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs\") pod \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\" (UID: \"54aa018a-3e7e-4c95-9c1d-387543ed5af0\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.493140 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config\") pod \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.503585 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2" (OuterVolumeSpecName: "kube-api-access-kz5c2") pod "54aa018a-3e7e-4c95-9c1d-387543ed5af0" (UID: "54aa018a-3e7e-4c95-9c1d-387543ed5af0"). InnerVolumeSpecName "kube-api-access-kz5c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.505187 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.505255 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-17c9-account-create-update-6xs6n"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.505272 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.507194 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs" (OuterVolumeSpecName: "logs") pod "54aa018a-3e7e-4c95-9c1d-387543ed5af0" (UID: "54aa018a-3e7e-4c95-9c1d-387543ed5af0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.560206 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data" (OuterVolumeSpecName: "config-data") pod "54aa018a-3e7e-4c95-9c1d-387543ed5af0" (UID: "54aa018a-3e7e-4c95-9c1d-387543ed5af0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.562453 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268" (OuterVolumeSpecName: "kube-api-access-7x268") pod "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" (UID: "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c"). InnerVolumeSpecName "kube-api-access-7x268". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.576012 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.596026 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz5c2\" (UniqueName: \"kubernetes.io/projected/54aa018a-3e7e-4c95-9c1d-387543ed5af0-kube-api-access-kz5c2\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.596054 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x268\" (UniqueName: \"kubernetes.io/projected/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-api-access-7x268\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.596063 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.596072 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54aa018a-3e7e-4c95-9c1d-387543ed5af0-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.654907 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" (UID: "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.663794 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "54aa018a-3e7e-4c95-9c1d-387543ed5af0" (UID: "54aa018a-3e7e-4c95-9c1d-387543ed5af0"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.679689 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.680887 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" (UID: "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.690110 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-659598d599-lpzh5"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.697830 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54aa018a-3e7e-4c95-9c1d-387543ed5af0" (UID: "54aa018a-3e7e-4c95-9c1d-387543ed5af0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.697952 4842 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.697980 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.697989 4842 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.697999 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54aa018a-3e7e-4c95-9c1d-387543ed5af0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.709729 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-654fdfd6b6-nrxvh"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.722148 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-654fdfd6b6-nrxvh"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.737794 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.744516 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-89ff-account-create-update-fbkfk"] Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.798983 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" (UID: "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.799169 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") pod \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\" (UID: \"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c\") " Feb 02 07:09:20 crc kubenswrapper[4842]: W0202 07:09:20.799701 4842 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c/volumes/kubernetes.io~secret/kube-state-metrics-tls-certs Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.799718 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" (UID: "6b11cfdf-ed7a-48ce-97eb-e03cd6be314c"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.900971 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72b63114-a275-4e32-9ad4-9f59e22151b3-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.900999 4842 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: I0202 07:09:20.901009 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5vs6\" (UniqueName: \"kubernetes.io/projected/72b63114-a275-4e32-9ad4-9f59e22151b3-kube-api-access-h5vs6\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.901085 4842 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.901128 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts podName:b912e45d-72e7-4250-9757-add1efcfb054 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:21.901114144 +0000 UTC m=+1387.278382056 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts") pod "root-account-create-update-kl9p2" (UID: "b912e45d-72e7-4250-9757-add1efcfb054") : configmap "openstack-scripts" not found Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.952575 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.956880 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.991649 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:20 crc kubenswrapper[4842]: E0202 07:09:20.991710 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerName="nova-cell1-conductor-conductor" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.002145 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.002241 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw8v8\" (UniqueName: \"kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8\") pod \"keystone-0ec7-account-create-update-9srfz\" (UID: \"db5059ce-9214-449d-a8d5-1b6ab7447e65\") " pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.002726 4842 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.002763 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:23.002750992 +0000 UTC m=+1388.380018904 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : configmap "openstack-scripts" not found Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.005482 4842 projected.go:194] Error preparing data for projected volume kube-api-access-jw8v8 for pod openstack/keystone-0ec7-account-create-update-9srfz: failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.005554 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8 podName:db5059ce-9214-449d-a8d5-1b6ab7447e65 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:23.005531973 +0000 UTC m=+1388.382799885 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jw8v8" (UniqueName: "kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8") pod "keystone-0ec7-account-create-update-9srfz" (UID: "db5059ce-9214-449d-a8d5-1b6ab7447e65") : failed to fetch token: serviceaccounts "galera-openstack" not found Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.146468 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e4d672b_cb7a_406d_ab62_12745f300ef0.slice/crio-95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf00b7c2b_79ea_4cd1_80c3_f74f7e398ffd.slice/crio-36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod679e6e39_029a_452e_a375_bf0b937e3fbe.slice/crio-conmon-aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e4d672b_cb7a_406d_ab62_12745f300ef0.slice/crio-conmon-95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod174fcd53_40ab_4d19_a317_bc5cd117d2a4.slice/crio-conmon-b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf00b7c2b_79ea_4cd1_80c3_f74f7e398ffd.slice/crio-conmon-36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.415838 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2348-account-create-update-j8g5r" event={"ID":"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d","Type":"ContainerDied","Data":"f55c42fda20e7505f223b55e3afbf9284af6c4d7c17fcc411b0d5c1ee7acf9ca"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.415881 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f55c42fda20e7505f223b55e3afbf9284af6c4d7c17fcc411b0d5c1ee7acf9ca" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.417833 4842 generic.go:334] "Generic (PLEG): container finished" podID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerID="83c2404b835485135c772ac74f310b1761d22ef1f63c10393be3a87c53fc66aa" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.417876 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerDied","Data":"83c2404b835485135c772ac74f310b1761d22ef1f63c10393be3a87c53fc66aa"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.417893 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc5c967fd-w6ljx" event={"ID":"eb022115-b53a-4ed0-a2a0-b44644dc26a7","Type":"ContainerDied","Data":"fd6b7a98a2a46a28710ac379918018f758437a367de16692a4e1403ffd79ebbd"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.417902 4842 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fd6b7a98a2a46a28710ac379918018f758437a367de16692a4e1403ffd79ebbd" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.419026 4842 generic.go:334] "Generic (PLEG): container finished" podID="34f55116-a518-4f21-8816-6f8232a6f68d" containerID="72e60f391adc327a7666947b2251ee7da0c5b5a42927991c1ba5e739d160e596" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.419060 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerDied","Data":"72e60f391adc327a7666947b2251ee7da0c5b5a42927991c1ba5e739d160e596"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.419074 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"34f55116-a518-4f21-8816-6f8232a6f68d","Type":"ContainerDied","Data":"03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.419083 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d59292614dd942c7945dc3ee9854947498f4230085fae20f5c0d549dbedbf1" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.420329 4842 generic.go:334] "Generic (PLEG): container finished" podID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerID="50694d5591176c65770672c30837d60f3438d04ee3ca91b5bc53b0366f9835df" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.420422 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerDied","Data":"50694d5591176c65770672c30837d60f3438d04ee3ca91b5bc53b0366f9835df"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.420449 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c96a7e1-78c3-449d-9200-735db4ee7086","Type":"ContainerDied","Data":"1eecf23079bd634775107b900580aa4bb87379a656bc114e56acf8d85609c009"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.420460 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eecf23079bd634775107b900580aa4bb87379a656bc114e56acf8d85609c009" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.421298 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8e42-account-create-update-pssf7" event={"ID":"92090cd2-6d30-4aec-81a2-f7d41c40b52d","Type":"ContainerDied","Data":"841933402afec6053b59c1d117b644948866331ecb15d4942a1241af82efdbd6"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.421320 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841933402afec6053b59c1d117b644948866331ecb15d4942a1241af82efdbd6" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.422425 4842 generic.go:334] "Generic (PLEG): container finished" podID="2e4d672b-cb7a-406d-ab62-12745f300ef0" containerID="95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.422470 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2e4d672b-cb7a-406d-ab62-12745f300ef0","Type":"ContainerDied","Data":"95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.422508 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"2e4d672b-cb7a-406d-ab62-12745f300ef0","Type":"ContainerDied","Data":"ccad06562fb6f40d062777e6d3a6e4d9830ae7a447085c52c329d40fd37ced11"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.422519 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccad06562fb6f40d062777e6d3a6e4d9830ae7a447085c52c329d40fd37ced11" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.423489 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-85ce-account-create-update-szhp5" event={"ID":"79d5e0a1-8df4-4db1-aaf8-0d253163a522","Type":"ContainerDied","Data":"92c5616de7100c6457ed5b0dcd602dadf7228bf9da3a33c8035d364e9130e12d"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.423519 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92c5616de7100c6457ed5b0dcd602dadf7228bf9da3a33c8035d364e9130e12d" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.424468 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bfdd-account-create-update-z7blt" event={"ID":"90821e80-1367-4cf6-8087-fb83507223ec","Type":"ContainerDied","Data":"6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.424490 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cb3fd3a05582a17982ba597c392cf5f579dd70cea15a2dd1fd0c7422d60a078" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.425759 4842 generic.go:334] "Generic (PLEG): container finished" podID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerID="c1cc1b81874f37b6dd69a794f4c89e58f1e938624f539804095c18ceb3989c67" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.425801 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerDied","Data":"c1cc1b81874f37b6dd69a794f4c89e58f1e938624f539804095c18ceb3989c67"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.425818 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b5c67fdbd-zsx96" event={"ID":"c56025ce-3772-435d-bdba-a4d1ba9d6e2f","Type":"ContainerDied","Data":"33a7212242745098719539d77d7d2ab10cc0d6841f34ba8ac2dabc8a942c26b5"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.425827 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a7212242745098719539d77d7d2ab10cc0d6841f34ba8ac2dabc8a942c26b5" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.427441 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"25609b1c-e1e9-4633-b3e3-93bd2f4396de","Type":"ContainerDied","Data":"22718259310cd947182a28b08951d593ee087b709a27af6ee23d9b940e93c5ac"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.427467 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22718259310cd947182a28b08951d593ee087b709a27af6ee23d9b940e93c5ac" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.429289 4842 generic.go:334] "Generic (PLEG): container finished" podID="b912e45d-72e7-4250-9757-add1efcfb054" containerID="13000d6307279a8f1879b7fd7be84a407943a9cc3066fff0cf9a626a1678f240" exitCode=1 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.429349 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kl9p2" 
event={"ID":"b912e45d-72e7-4250-9757-add1efcfb054","Type":"ContainerDied","Data":"13000d6307279a8f1879b7fd7be84a407943a9cc3066fff0cf9a626a1678f240"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.440263 4842 generic.go:334] "Generic (PLEG): container finished" podID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerID="36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.442273 4842 generic.go:334] "Generic (PLEG): container finished" podID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerID="aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.447018 4842 generic.go:334] "Generic (PLEG): container finished" podID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerID="aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.448705 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6e38b7-4a6d-4d93-af3d-5abac4efc44d" path="/var/lib/kubelet/pods/3a6e38b7-4a6d-4d93-af3d-5abac4efc44d/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.449262 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4450e400-557b-4092-8f73-124910137dc4" path="/var/lib/kubelet/pods/4450e400-557b-4092-8f73-124910137dc4/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.449762 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5130c998-8bfd-413c-887e-2100da96f6ce" path="/var/lib/kubelet/pods/5130c998-8bfd-413c-887e-2100da96f6ce/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.450090 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72b63114-a275-4e32-9ad4-9f59e22151b3" path="/var/lib/kubelet/pods/72b63114-a275-4e32-9ad4-9f59e22151b3/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.451423 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d00cbf-6e28-4be5-abc2-6c77e76de81e" path="/var/lib/kubelet/pods/88d00cbf-6e28-4be5-abc2-6c77e76de81e/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.452994 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dad4bc1-b1ae-436c-925e-986d33b77e51" path="/var/lib/kubelet/pods/8dad4bc1-b1ae-436c-925e-986d33b77e51/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.453181 4842 generic.go:334] "Generic (PLEG): container finished" podID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerID="b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.468630 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" path="/var/lib/kubelet/pods/900b2d20-01c8-47e0-8271-ccfd8549d468/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.469793 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.470287 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" path="/var/lib/kubelet/pods/9eff2351-b4e8-43cf-a232-9c36cb11c130/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.472077 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bed4dadb-b854-4082-b18a-67f58543bb9a" path="/var/lib/kubelet/pods/bed4dadb-b854-4082-b18a-67f58543bb9a/volumes" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473145 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-716d-account-create-update-x4f2v" event={"ID":"e91519e6-bf55-4c08-8274-1d8a59f1ff52","Type":"ContainerDied","Data":"16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473177 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16450eee390031a65a59938215b79e0eab96c41ea0a94add55f20f842e142b6e" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473188 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerDied","Data":"36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473210 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerDied","Data":"aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473259 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f94c60e-a4fc-4b7d-96cd-367d46a731c4","Type":"ContainerDied","Data":"aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473274 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4850512e-bbc8-468d-94ef-1d1be3b0b49c","Type":"ContainerDied","Data":"b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.473288 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6b11cfdf-ed7a-48ce-97eb-e03cd6be314c","Type":"ContainerDied","Data":"c5471f47cbc6e33e200626c1c2261b0fedfaae9cf67bbd6b8d7f8382239e8d5f"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.486834 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.501253 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-pssf7" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.515808 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.517428 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.522245 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.524957 4842 scope.go:117] "RemoveContainer" containerID="1e413e67564e718a498ac35eeced53092dbd9372163eaf63c69cfa47632f99ec" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.534890 4842 generic.go:334] "Generic (PLEG): container finished" podID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerID="b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1" exitCode=0 Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.534983 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerDied","Data":"b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.537971 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0ec7-account-create-update-9srfz" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.538245 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.539570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"54aa018a-3e7e-4c95-9c1d-387543ed5af0","Type":"ContainerDied","Data":"97d85497136bca54efa2ce8c8d3033b9016ab0e739dcabcdf04a8ad306a7c1b7"} Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.568050 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.598125 4842 scope.go:117] "RemoveContainer" containerID="9926781ae9dc15022af00f978a6d8014ea831a07a27df31142281c3ba8914507" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.609441 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.610530 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.616168 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.626423 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.635328 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.641393 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.644566 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661406 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts\") pod \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661482 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9wrf\" (UniqueName: \"kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf\") pod \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661566 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661611 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661632 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts\") pod \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661665 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cg6x\" (UniqueName: \"kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x\") pod \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\" (UID: \"92090cd2-6d30-4aec-81a2-f7d41c40b52d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661701 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661730 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc9ng\" (UniqueName: \"kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng\") pod \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\" (UID: \"79d5e0a1-8df4-4db1-aaf8-0d253163a522\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661768 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts\") pod \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\" (UID: \"81e3e639-93f4-48d1-8a2f-89e48bcc5f1d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661788 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9mmn\" (UniqueName: 
\"kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn\") pod \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661817 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh8lx\" (UniqueName: \"kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661850 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661881 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.661911 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts\") pod \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\" (UID: \"e91519e6-bf55-4c08-8274-1d8a59f1ff52\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.662736 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e91519e6-bf55-4c08-8274-1d8a59f1ff52" (UID: "e91519e6-bf55-4c08-8274-1d8a59f1ff52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.663262 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92090cd2-6d30-4aec-81a2-f7d41c40b52d" (UID: "92090cd2-6d30-4aec-81a2-f7d41c40b52d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.667753 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng" (OuterVolumeSpecName: "kube-api-access-rc9ng") pod "79d5e0a1-8df4-4db1-aaf8-0d253163a522" (UID: "79d5e0a1-8df4-4db1-aaf8-0d253163a522"). InnerVolumeSpecName "kube-api-access-rc9ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.667993 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81e3e639-93f4-48d1-8a2f-89e48bcc5f1d" (UID: "81e3e639-93f4-48d1-8a2f-89e48bcc5f1d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.672572 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79d5e0a1-8df4-4db1-aaf8-0d253163a522" (UID: "79d5e0a1-8df4-4db1-aaf8-0d253163a522"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.675276 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs" (OuterVolumeSpecName: "logs") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.675711 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn" (OuterVolumeSpecName: "kube-api-access-q9mmn") pod "e91519e6-bf55-4c08-8274-1d8a59f1ff52" (UID: "e91519e6-bf55-4c08-8274-1d8a59f1ff52"). InnerVolumeSpecName "kube-api-access-q9mmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.680887 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx" (OuterVolumeSpecName: "kube-api-access-nh8lx") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "kube-api-access-nh8lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.687250 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.695692 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x" (OuterVolumeSpecName: "kube-api-access-8cg6x") pod "92090cd2-6d30-4aec-81a2-f7d41c40b52d" (UID: "92090cd2-6d30-4aec-81a2-f7d41c40b52d"). InnerVolumeSpecName "kube-api-access-8cg6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.703837 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data" (OuterVolumeSpecName: "config-data") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.704070 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.715416 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf" (OuterVolumeSpecName: "kube-api-access-c9wrf") pod "81e3e639-93f4-48d1-8a2f-89e48bcc5f1d" (UID: "81e3e639-93f4-48d1-8a2f-89e48bcc5f1d"). InnerVolumeSpecName "kube-api-access-c9wrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.715507 4842 scope.go:117] "RemoveContainer" containerID="75aec13501e8ac4a78490209fc3281c84b435ac2ebcc48667746bb6eb38e36e9" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.724103 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.726055 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc is running failed: container process not found" containerID="aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.728141 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc is running failed: container process not found" containerID="aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.731767 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc is running failed: container process not found" containerID="aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.731814 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerName="nova-scheduler-scheduler" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.749086 4842 scope.go:117] "RemoveContainer" containerID="c6b2aef7c5907fec1f821bb206e985dfa1c10ebd9ed998f2f05ec13c6cf132ab" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763782 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763826 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: 
\"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763864 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763884 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkmc9\" (UniqueName: \"kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9\") pod \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763910 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763931 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763956 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data\") pod \"2e4d672b-cb7a-406d-ab62-12745f300ef0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763972 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle\") pod \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.763989 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764005 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764024 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data\") pod \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764046 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 
07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764077 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764105 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764134 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764160 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764177 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5nxt\" (UniqueName: \"kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764195 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle\") pod \"2e4d672b-cb7a-406d-ab62-12745f300ef0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764227 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764245 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs\") pod \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764270 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom\") pod \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\" (UID: \"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764303 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 
07:09:21.764318 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764334 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5svcs\" (UniqueName: \"kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs\") pod \"90821e80-1367-4cf6-8087-fb83507223ec\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764401 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764438 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.764455 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config\") pod \"2e4d672b-cb7a-406d-ab62-12745f300ef0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767135 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs" (OuterVolumeSpecName: "logs") pod "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" (UID: "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767481 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zscmk\" (UniqueName: \"kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767508 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767535 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767552 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs\") pod \"2e4d672b-cb7a-406d-ab62-12745f300ef0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767571 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs\") pod \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\" (UID: \"c56025ce-3772-435d-bdba-a4d1ba9d6e2f\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767668 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767712 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle\") pod \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\" (UID: \"eb022115-b53a-4ed0-a2a0-b44644dc26a7\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767739 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngbgx\" (UniqueName: \"kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx\") pod \"2e4d672b-cb7a-406d-ab62-12745f300ef0\" (UID: \"2e4d672b-cb7a-406d-ab62-12745f300ef0\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767759 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9pr5\" (UniqueName: \"kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767777 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: 
\"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767795 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767813 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts\") pod \"90821e80-1367-4cf6-8087-fb83507223ec\" (UID: \"90821e80-1367-4cf6-8087-fb83507223ec\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767834 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rq6l\" (UniqueName: \"kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767860 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs\") pod \"34f55116-a518-4f21-8816-6f8232a6f68d\" (UID: \"34f55116-a518-4f21-8816-6f8232a6f68d\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.767875 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data\") pod \"6c96a7e1-78c3-449d-9200-735db4ee7086\" (UID: \"6c96a7e1-78c3-449d-9200-735db4ee7086\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768394 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9wrf\" (UniqueName: \"kubernetes.io/projected/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-kube-api-access-c9wrf\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768408 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768417 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25609b1c-e1e9-4633-b3e3-93bd2f4396de-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768426 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79d5e0a1-8df4-4db1-aaf8-0d253163a522-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768435 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cg6x\" (UniqueName: \"kubernetes.io/projected/92090cd2-6d30-4aec-81a2-f7d41c40b52d-kube-api-access-8cg6x\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768444 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768453 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc9ng\" 
(UniqueName: \"kubernetes.io/projected/79d5e0a1-8df4-4db1-aaf8-0d253163a522-kube-api-access-rc9ng\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768462 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768470 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9mmn\" (UniqueName: \"kubernetes.io/projected/e91519e6-bf55-4c08-8274-1d8a59f1ff52-kube-api-access-q9mmn\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768478 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768487 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh8lx\" (UniqueName: \"kubernetes.io/projected/25609b1c-e1e9-4633-b3e3-93bd2f4396de-kube-api-access-nh8lx\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768495 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e91519e6-bf55-4c08-8274-1d8a59f1ff52-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.768504 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92090cd2-6d30-4aec-81a2-f7d41c40b52d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.774660 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.781287 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" (UID: "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.782111 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs" (OuterVolumeSpecName: "logs") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.784768 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs" (OuterVolumeSpecName: "logs") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.787839 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts" (OuterVolumeSpecName: "scripts") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.788342 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.788420 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt" (OuterVolumeSpecName: "kube-api-access-d5nxt") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "kube-api-access-d5nxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.788588 4842 scope.go:117] "RemoveContainer" containerID="415d21f9580ea68e52aa649eacebbe3550d2da28410a54eb695a4a912d91fbdd" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.789913 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90821e80-1367-4cf6-8087-fb83507223ec" (UID: "90821e80-1367-4cf6-8087-fb83507223ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.789959 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data" (OuterVolumeSpecName: "config-data") pod "2e4d672b-cb7a-406d-ab62-12745f300ef0" (UID: "2e4d672b-cb7a-406d-ab62-12745f300ef0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.790727 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs" (OuterVolumeSpecName: "logs") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.794667 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.794962 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "2e4d672b-cb7a-406d-ab62-12745f300ef0" (UID: "2e4d672b-cb7a-406d-ab62-12745f300ef0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.802613 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0ec7-account-create-update-9srfz"] Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.807437 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs" (OuterVolumeSpecName: "logs") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.813055 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs" (OuterVolumeSpecName: "kube-api-access-5svcs") pod "90821e80-1367-4cf6-8087-fb83507223ec" (UID: "90821e80-1367-4cf6-8087-fb83507223ec"). InnerVolumeSpecName "kube-api-access-5svcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.813943 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts" (OuterVolumeSpecName: "scripts") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.814058 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l" (OuterVolumeSpecName: "kube-api-access-9rq6l") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "kube-api-access-9rq6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.814650 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx" (OuterVolumeSpecName: "kube-api-access-ngbgx") pod "2e4d672b-cb7a-406d-ab62-12745f300ef0" (UID: "2e4d672b-cb7a-406d-ab62-12745f300ef0"). InnerVolumeSpecName "kube-api-access-ngbgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.814717 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9" (OuterVolumeSpecName: "kube-api-access-rkmc9") pod "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" (UID: "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd"). InnerVolumeSpecName "kube-api-access-rkmc9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.814752 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5" (OuterVolumeSpecName: "kube-api-access-r9pr5") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "kube-api-access-r9pr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.817878 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts" (OuterVolumeSpecName: "scripts") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.818543 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.819618 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.820683 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.825873 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0ec7-account-create-update-9srfz"] Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.837515 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.838078 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.848507 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk" (OuterVolumeSpecName: "kube-api-access-zscmk") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "kube-api-access-zscmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.854046 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.869359 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.869700 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.870092 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") pod \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\" (UID: \"25609b1c-e1e9-4633-b3e3-93bd2f4396de\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.870368 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.870462 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.871382 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.871479 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.872168 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4btlq\" (UniqueName: \"kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.872383 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.872480 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs\") pod \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\" (UID: \"174fcd53-40ab-4d19-a317-bc5cd117d2a4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873032 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873488 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5nxt\" (UniqueName: \"kubernetes.io/projected/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-kube-api-access-d5nxt\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873557 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873621 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873686 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873738 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873787 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5svcs\" (UniqueName: \"kubernetes.io/projected/90821e80-1367-4cf6-8087-fb83507223ec-kube-api-access-5svcs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873835 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c96a7e1-78c3-449d-9200-735db4ee7086-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873910 4842 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873981 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874033 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zscmk\" (UniqueName: \"kubernetes.io/projected/eb022115-b53a-4ed0-a2a0-b44644dc26a7-kube-api-access-zscmk\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874082 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874133 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874182 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngbgx\" (UniqueName: \"kubernetes.io/projected/2e4d672b-cb7a-406d-ab62-12745f300ef0-kube-api-access-ngbgx\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874258 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9pr5\" (UniqueName: \"kubernetes.io/projected/34f55116-a518-4f21-8816-6f8232a6f68d-kube-api-access-r9pr5\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874333 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874392 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90821e80-1367-4cf6-8087-fb83507223ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874476 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rq6l\" (UniqueName: \"kubernetes.io/projected/6c96a7e1-78c3-449d-9200-735db4ee7086-kube-api-access-9rq6l\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874530 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkmc9\" (UniqueName: \"kubernetes.io/projected/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-kube-api-access-rkmc9\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874580 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb022115-b53a-4ed0-a2a0-b44644dc26a7-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874638 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e4d672b-cb7a-406d-ab62-12745f300ef0-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874688 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34f55116-a518-4f21-8816-6f8232a6f68d-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874743 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.874795 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.871955 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.870038 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: W0202 07:09:21.870206 4842 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/25609b1c-e1e9-4633-b3e3-93bd2f4396de/volumes/kubernetes.io~secret/internal-tls-certs Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.876634 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25609b1c-e1e9-4633-b3e3-93bd2f4396de" (UID: "25609b1c-e1e9-4633-b3e3-93bd2f4396de"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.873228 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.884862 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.907912 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts" (OuterVolumeSpecName: "scripts") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.919622 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e4d672b-cb7a-406d-ab62-12745f300ef0" (UID: "2e4d672b-cb7a-406d-ab62-12745f300ef0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.928615 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq" (OuterVolumeSpecName: "kube-api-access-4btlq") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "kube-api-access-4btlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.939764 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.958515 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.964777 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.975634 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data\") pod \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.975750 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs\") pod \"679e6e39-029a-452e-a375-bf0b937e3fbe\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.975789 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k69gq\" (UniqueName: \"kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq\") pod \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.975839 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle\") pod \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.975865 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom\") pod \"679e6e39-029a-452e-a375-bf0b937e3fbe\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976141 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs" (OuterVolumeSpecName: "logs") pod "679e6e39-029a-452e-a375-bf0b937e3fbe" (UID: "679e6e39-029a-452e-a375-bf0b937e3fbe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976308 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data\") pod \"679e6e39-029a-452e-a375-bf0b937e3fbe\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976359 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lfws\" (UniqueName: \"kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws\") pod \"679e6e39-029a-452e-a375-bf0b937e3fbe\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976466 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle\") pod \"679e6e39-029a-452e-a375-bf0b937e3fbe\" (UID: \"679e6e39-029a-452e-a375-bf0b937e3fbe\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976506 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58tm7\" (UniqueName: \"kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7\") pod \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\" (UID: \"4850512e-bbc8-468d-94ef-1d1be3b0b49c\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976542 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data\") pod \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976562 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle\") pod \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\" (UID: \"1f94c60e-a4fc-4b7d-96cd-367d46a731c4\") " Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976942 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25609b1c-e1e9-4633-b3e3-93bd2f4396de-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976956 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976966 4842 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976975 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.976984 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc 
kubenswrapper[4842]: I0202 07:09:21.976993 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977005 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db5059ce-9214-449d-a8d5-1b6ab7447e65-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977017 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4btlq\" (UniqueName: \"kubernetes.io/projected/174fcd53-40ab-4d19-a317-bc5cd117d2a4-kube-api-access-4btlq\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977027 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/679e6e39-029a-452e-a375-bf0b937e3fbe-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977035 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw8v8\" (UniqueName: \"kubernetes.io/projected/db5059ce-9214-449d-a8d5-1b6ab7447e65-kube-api-access-jw8v8\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977044 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.977052 4842 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/174fcd53-40ab-4d19-a317-bc5cd117d2a4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.977106 4842 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Feb 02 07:09:21 crc kubenswrapper[4842]: E0202 07:09:21.977150 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts podName:b912e45d-72e7-4250-9757-add1efcfb054 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:23.977135838 +0000 UTC m=+1389.354403750 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts") pod "root-account-create-update-kl9p2" (UID: "b912e45d-72e7-4250-9757-add1efcfb054") : configmap "openstack-scripts" not found Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.993309 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "679e6e39-029a-452e-a375-bf0b937e3fbe" (UID: "679e6e39-029a-452e-a375-bf0b937e3fbe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.993472 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data" (OuterVolumeSpecName: "config-data") pod "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" (UID: "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.993639 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.993901 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws" (OuterVolumeSpecName: "kube-api-access-9lfws") pod "679e6e39-029a-452e-a375-bf0b937e3fbe" (UID: "679e6e39-029a-452e-a375-bf0b937e3fbe"). InnerVolumeSpecName "kube-api-access-9lfws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:21 crc kubenswrapper[4842]: I0202 07:09:21.993977 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7" (OuterVolumeSpecName: "kube-api-access-58tm7") pod "4850512e-bbc8-468d-94ef-1d1be3b0b49c" (UID: "4850512e-bbc8-468d-94ef-1d1be3b0b49c"). InnerVolumeSpecName "kube-api-access-58tm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.001449 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq" (OuterVolumeSpecName: "kube-api-access-k69gq") pod "1f94c60e-a4fc-4b7d-96cd-367d46a731c4" (UID: "1f94c60e-a4fc-4b7d-96cd-367d46a731c4"). InnerVolumeSpecName "kube-api-access-k69gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.020603 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.028245 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" (UID: "f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.029157 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.036440 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data" (OuterVolumeSpecName: "config-data") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.039848 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.042141 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data" (OuterVolumeSpecName: "config-data") pod "4850512e-bbc8-468d-94ef-1d1be3b0b49c" (UID: "4850512e-bbc8-468d-94ef-1d1be3b0b49c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.045182 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "679e6e39-029a-452e-a375-bf0b937e3fbe" (UID: "679e6e39-029a-452e-a375-bf0b937e3fbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.045859 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data" (OuterVolumeSpecName: "config-data") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.047629 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.047988 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6c96a7e1-78c3-449d-9200-735db4ee7086" (UID: "6c96a7e1-78c3-449d-9200-735db4ee7086"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.052556 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.061498 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.069967 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4850512e-bbc8-468d-94ef-1d1be3b0b49c" (UID: "4850512e-bbc8-468d-94ef-1d1be3b0b49c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.071435 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "2e4d672b-cb7a-406d-ab62-12745f300ef0" (UID: "2e4d672b-cb7a-406d-ab62-12745f300ef0"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.072887 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f94c60e-a4fc-4b7d-96cd-367d46a731c4" (UID: "1f94c60e-a4fc-4b7d-96cd-367d46a731c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078576 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078597 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078608 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58tm7\" (UniqueName: \"kubernetes.io/projected/4850512e-bbc8-468d-94ef-1d1be3b0b49c-kube-api-access-58tm7\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078617 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078626 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078638 4842 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e4d672b-cb7a-406d-ab62-12745f300ef0-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078649 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078660 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-config-data\") on node \"crc\" DevicePath \"\"" 
Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078672 4842 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078682 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078690 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c96a7e1-78c3-449d-9200-735db4ee7086-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078700 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k69gq\" (UniqueName: \"kubernetes.io/projected/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-kube-api-access-k69gq\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078709 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078718 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4850512e-bbc8-468d-94ef-1d1be3b0b49c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078726 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078735 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078745 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078753 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078762 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lfws\" (UniqueName: \"kubernetes.io/projected/679e6e39-029a-452e-a375-bf0b937e3fbe-kube-api-access-9lfws\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078773 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.078781 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc 
kubenswrapper[4842]: I0202 07:09:22.087108 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.088795 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data" (OuterVolumeSpecName: "config-data") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.095612 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c56025ce-3772-435d-bdba-a4d1ba9d6e2f" (UID: "c56025ce-3772-435d-bdba-a4d1ba9d6e2f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.135721 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.141392 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data" (OuterVolumeSpecName: "config-data") pod "1f94c60e-a4fc-4b7d-96cd-367d46a731c4" (UID: "1f94c60e-a4fc-4b7d-96cd-367d46a731c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.145147 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data" (OuterVolumeSpecName: "config-data") pod "679e6e39-029a-452e-a375-bf0b937e3fbe" (UID: "679e6e39-029a-452e-a375-bf0b937e3fbe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.145377 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data" (OuterVolumeSpecName: "config-data") pod "34f55116-a518-4f21-8816-6f8232a6f68d" (UID: "34f55116-a518-4f21-8816-6f8232a6f68d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.154417 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb022115-b53a-4ed0-a2a0-b44644dc26a7" (UID: "eb022115-b53a-4ed0-a2a0-b44644dc26a7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.173155 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data" (OuterVolumeSpecName: "config-data") pod "174fcd53-40ab-4d19-a317-bc5cd117d2a4" (UID: "174fcd53-40ab-4d19-a317-bc5cd117d2a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180283 4842 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180316 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34f55116-a518-4f21-8816-6f8232a6f68d-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180325 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180334 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56025ce-3772-435d-bdba-a4d1ba9d6e2f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180345 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f94c60e-a4fc-4b7d-96cd-367d46a731c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180353 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180361 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb022115-b53a-4ed0-a2a0-b44644dc26a7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180384 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/174fcd53-40ab-4d19-a317-bc5cd117d2a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.180394 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/679e6e39-029a-452e-a375-bf0b937e3fbe-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.315935 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"] Feb 02 07:09:22 crc kubenswrapper[4842]: W0202 07:09:22.345035 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02f0d774_dbe6_45d5_9ffa_64383c8be0d7.slice/crio-2cbf9ae96d96235341d31a68b4251a05222974fd5545b2aa050455da09a3394e WatchSource:0}: Error finding container 2cbf9ae96d96235341d31a68b4251a05222974fd5545b2aa050455da09a3394e: Status 404 returned error can't find the container with id 
2cbf9ae96d96235341d31a68b4251a05222974fd5545b2aa050455da09a3394e Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.369445 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.482194 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.488116 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts\") pod \"b912e45d-72e7-4250-9757-add1efcfb054\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.488278 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz2n6\" (UniqueName: \"kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6\") pod \"b912e45d-72e7-4250-9757-add1efcfb054\" (UID: \"b912e45d-72e7-4250-9757-add1efcfb054\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.489139 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b912e45d-72e7-4250-9757-add1efcfb054" (UID: "b912e45d-72e7-4250-9757-add1efcfb054"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.498824 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6" (OuterVolumeSpecName: "kube-api-access-wz2n6") pod "b912e45d-72e7-4250-9757-add1efcfb054" (UID: "b912e45d-72e7-4250-9757-add1efcfb054"). InnerVolumeSpecName "kube-api-access-wz2n6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.512842 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.514066 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.515864 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.515891 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.558795 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1f94c60e-a4fc-4b7d-96cd-367d46a731c4","Type":"ContainerDied","Data":"95e75a79dbca9de8ff0edaf83bbf9a981efefb176ab75feebb5919ac4f34c81f"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.558849 4842 scope.go:117] "RemoveContainer" containerID="aa3abfa94e116973782248416ac6de3799758150d193f7dbb95e6a13e34381cc" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.558947 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.564805 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"4850512e-bbc8-468d-94ef-1d1be3b0b49c","Type":"ContainerDied","Data":"f8175b6df5dfbdeb4f2b96118c96bb8462df0286a53b3bdcaea8cf46054c0053"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.564878 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.579519 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kl9p2" event={"ID":"b912e45d-72e7-4250-9757-add1efcfb054","Type":"ContainerDied","Data":"c436c98ac030592508317571235d4b580f2fca45d60bf44a940ecdb59f089266"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.579627 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kl9p2" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.583103 4842 scope.go:117] "RemoveContainer" containerID="b02a597eaa6f312a54cab57cb22a7ba5718d1a52db99c582f4e0031ffecbffc2" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589290 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589322 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-848c6\" (UniqueName: \"kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589402 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589432 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589452 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589524 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589545 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.589693 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle\") pod \"709c39fb-802f-4690-89f6-41a717e7244c\" (UID: \"709c39fb-802f-4690-89f6-41a717e7244c\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.590096 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b912e45d-72e7-4250-9757-add1efcfb054-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.590131 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz2n6\" (UniqueName: 
\"kubernetes.io/projected/b912e45d-72e7-4250-9757-add1efcfb054-kube-api-access-wz2n6\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.592113 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.592080 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.592694 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.592814 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.601372 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"174fcd53-40ab-4d19-a317-bc5cd117d2a4","Type":"ContainerDied","Data":"dc072634ce1fdc7d7f270a2d47917083559fd131ffec946966f43f1f6581f8f4"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.601465 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.606686 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" event={"ID":"679e6e39-029a-452e-a375-bf0b937e3fbe","Type":"ContainerDied","Data":"eb1c879ce0521868ffea7d5ca4ba1e741e4b7c55bb4a6410da53f5413323bc74"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.606797 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77c4859bf4-qzmpm" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.608534 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6064786a-fa53-47a7-88ee-384cf70a86c6/ovn-northd/0.log" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.609413 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.620268 4842 scope.go:117] "RemoveContainer" containerID="13000d6307279a8f1879b7fd7be84a407943a9cc3066fff0cf9a626a1678f240" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.620574 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerStarted","Data":"2cbf9ae96d96235341d31a68b4251a05222974fd5545b2aa050455da09a3394e"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.625129 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6" (OuterVolumeSpecName: "kube-api-access-848c6") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "kube-api-access-848c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.625600 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.631118 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_6064786a-fa53-47a7-88ee-384cf70a86c6/ovn-northd/0.log" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.631158 4842 generic.go:334] "Generic (PLEG): container finished" podID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerID="6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" exitCode=139 Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.631581 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"6064786a-fa53-47a7-88ee-384cf70a86c6","Type":"ContainerDied","Data":"6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.631715 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.646465 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.655306 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.660070 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-57cc9f4749-jxzrq" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.660485 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-57cc9f4749-jxzrq" event={"ID":"f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd","Type":"ContainerDied","Data":"1a2fdbaaf7cba0dd3058c59daa47fefc2d3624684698fe684e8a50e2db075890"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.680976 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691172 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691199 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-848c6\" (UniqueName: \"kubernetes.io/projected/709c39fb-802f-4690-89f6-41a717e7244c-kube-api-access-848c6\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691209 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/709c39fb-802f-4690-89f6-41a717e7244c-config-data-generated\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691230 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-config-data-default\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691238 4842 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-kolla-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.691247 4842 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/709c39fb-802f-4690-89f6-41a717e7244c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.698867 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700473 4842 generic.go:334] "Generic (PLEG): container finished" podID="709c39fb-802f-4690-89f6-41a717e7244c" containerID="c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c" exitCode=0 Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700554 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerDied","Data":"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700615 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700684 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b5c67fdbd-zsx96" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700761 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2348-account-create-update-j8g5r" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700791 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8e42-account-create-update-pssf7" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700821 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cc5c967fd-w6ljx" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700843 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-85ce-account-create-update-szhp5" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700868 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700895 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-716d-account-create-update-x4f2v" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700922 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700948 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700975 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bfdd-account-create-update-z7blt" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.700975 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"709c39fb-802f-4690-89f6-41a717e7244c","Type":"ContainerDied","Data":"b0c718acbfc7b29da36fd02c7d5b494cfe5ffb0fab4eeaa9d4ac6e1362b5ae3e"} Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.701059 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.710002 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "709c39fb-802f-4690-89f6-41a717e7244c" (UID: "709c39fb-802f-4690-89f6-41a717e7244c"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.711863 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.721875 4842 scope.go:117] "RemoveContainer" containerID="bad70e2dba666c009e7972d01ff11c1b18b18e47b07343dcd24db229c935fcc3" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.739102 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.744543 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kl9p2"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.792704 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.792762 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793176 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts" (OuterVolumeSpecName: "scripts") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793314 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793334 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793349 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config" (OuterVolumeSpecName: "config") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793598 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793666 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793693 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qdwq\" (UniqueName: \"kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.793990 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle\") pod \"6064786a-fa53-47a7-88ee-384cf70a86c6\" (UID: \"6064786a-fa53-47a7-88ee-384cf70a86c6\") " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.794341 4842 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/709c39fb-802f-4690-89f6-41a717e7244c-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.794352 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-rundir\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.794369 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.794378 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.794388 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6064786a-fa53-47a7-88ee-384cf70a86c6-config\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.798481 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq" (OuterVolumeSpecName: "kube-api-access-4qdwq") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "kube-api-access-4qdwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.826552 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.838410 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.838601 4842 scope.go:117] "RemoveContainer" containerID="4bae417047baf6bf846e8de15338ba7207499db97e8d990c0e70145588c621ef" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.847135 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.857243 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-77c4859bf4-qzmpm"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.869686 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.879524 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.891548 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.895933 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.895958 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qdwq\" (UniqueName: \"kubernetes.io/projected/6064786a-fa53-47a7-88ee-384cf70a86c6-kube-api-access-4qdwq\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.895968 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.896025 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Feb 02 07:09:22 crc kubenswrapper[4842]: E0202 07:09:22.896072 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data podName:2b2ca532-dbbc-4148-8d2f-fc474685f0bd nodeName:}" failed. No retries permitted until 2026-02-02 07:09:30.896057314 +0000 UTC m=+1396.273325226 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data") pod "rabbitmq-server-0" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd") : configmap "rabbitmq-config-data" not found Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.911367 4842 scope.go:117] "RemoveContainer" containerID="b1e2b0db828452447ced8622fe6dcff41213b22d66d8c13c96258aefe2a29db1" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.912931 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-57cc9f4749-jxzrq"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.917119 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.929988 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.938501 4842 scope.go:117] "RemoveContainer" containerID="454fd5e306d51498a984d5077e2446e7c6cf9f4c21170f227c52179104c4a621" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.941388 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "6064786a-fa53-47a7-88ee-384cf70a86c6" (UID: "6064786a-fa53-47a7-88ee-384cf70a86c6"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.945839 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5b5c67fdbd-zsx96"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.969810 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.975820 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.986757 4842 scope.go:117] "RemoveContainer" containerID="aee85aee5516dd19e05e53144d572bf0aa1bff0b09c36ebb0b91fd8f463420c6" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.997417 4842 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:22 crc kubenswrapper[4842]: I0202 07:09:22.997440 4842 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/6064786a-fa53-47a7-88ee-384cf70a86c6-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.031733 4842 scope.go:117] "RemoveContainer" containerID="5a24327ba4517226f20e20f0a45585d27dd9a1490c6050d591f1638384be7d6d" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.064400 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.076139 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2348-account-create-update-j8g5r"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.077316 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.081367 4842 scope.go:117] "RemoveContainer" containerID="e96862cf77fa128f12f3b9982dfad78848395bebaf2c0c3ff7a1cca181e725f0" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.088935 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.098721 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.127343 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.148436 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-85ce-account-create-update-szhp5"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.158962 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-85ce-account-create-update-szhp5"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.173988 4842 scope.go:117] "RemoveContainer" containerID="6b0de6a9b1a36bc3d2910cbd8bed0ec4d6b0a971b7c05c08ccf5a0c3fa8afa6c" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.175109 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.181205 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.194321 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-716d-account-create-update-x4f2v"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.208467 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.208673 4842 scope.go:117] "RemoveContainer" containerID="36bc22b70997be0e1a4613b0f92eaab2935de0d49964ada65b21f18ae7b1478b" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.239412 4842 scope.go:117] "RemoveContainer" containerID="2a1ff124f28b987212a2f7ed64a1bf208d310f3e9f13e80b4572c2dce5f8a5f9" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.253721 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.264428 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.265431 4842 scope.go:117] "RemoveContainer" containerID="c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.269852 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5cc5c967fd-w6ljx"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.286925 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.293124 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bfdd-account-create-update-z7blt"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.293455 4842 scope.go:117] "RemoveContainer" containerID="97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.297938 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312311 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312396 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc 
kubenswrapper[4842]: I0202 07:09:23.312428 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312469 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-457v8\" (UniqueName: \"kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312597 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312643 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312694 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.312741 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle\") pod \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\" (UID: \"7343dd67-a085-4da9-8d79-f25ea1e20ca6\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.315159 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.320425 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8" (OuterVolumeSpecName: "kube-api-access-457v8") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "kube-api-access-457v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.320646 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts" (OuterVolumeSpecName: "scripts") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.325955 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.333981 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.334743 4842 scope.go:117] "RemoveContainer" containerID="c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.335140 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c\": container with ID starting with c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c not found: ID does not exist" containerID="c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.335180 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c"} err="failed to get container status \"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c\": rpc error: code = NotFound desc = could not find container \"c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c\": container with ID starting with c560cf8ca46605a269f576b719a4cf3ca939b8e2744573792764df19d7522c8c not found: ID does not exist" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.335242 4842 scope.go:117] "RemoveContainer" containerID="97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.336291 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d\": container with ID starting with 97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d not found: ID does not exist" containerID="97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.336313 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d"} err="failed to get container status \"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d\": rpc error: code = NotFound desc = could not find container \"97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d\": container with ID starting with 97ba3917d42f55e5202587bc21acaf8c4c98f2515894b36ef8743fca56ae4a0d not found: ID does not exist" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.339943 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.351575 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.352601 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8e42-account-create-update-pssf7"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.370450 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.375529 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.376116 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.382563 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data" (OuterVolumeSpecName: "config-data") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.399591 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7343dd67-a085-4da9-8d79-f25ea1e20ca6" (UID: "7343dd67-a085-4da9-8d79-f25ea1e20ca6"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414406 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414437 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414450 4842 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414461 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-457v8\" (UniqueName: \"kubernetes.io/projected/7343dd67-a085-4da9-8d79-f25ea1e20ca6-kube-api-access-457v8\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414470 4842 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414478 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414486 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.414494 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7343dd67-a085-4da9-8d79-f25ea1e20ca6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.459594 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" path="/var/lib/kubelet/pods/174fcd53-40ab-4d19-a317-bc5cd117d2a4/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.460518 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" path="/var/lib/kubelet/pods/1f94c60e-a4fc-4b7d-96cd-367d46a731c4/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.461193 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" path="/var/lib/kubelet/pods/25609b1c-e1e9-4633-b3e3-93bd2f4396de/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.462847 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e4d672b-cb7a-406d-ab62-12745f300ef0" path="/var/lib/kubelet/pods/2e4d672b-cb7a-406d-ab62-12745f300ef0/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.463760 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" path="/var/lib/kubelet/pods/34f55116-a518-4f21-8816-6f8232a6f68d/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.465279 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" path="/var/lib/kubelet/pods/4850512e-bbc8-468d-94ef-1d1be3b0b49c/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.465983 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" path="/var/lib/kubelet/pods/54aa018a-3e7e-4c95-9c1d-387543ed5af0/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.466796 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" path="/var/lib/kubelet/pods/6064786a-fa53-47a7-88ee-384cf70a86c6/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.468210 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" path="/var/lib/kubelet/pods/679e6e39-029a-452e-a375-bf0b937e3fbe/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.469211 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" path="/var/lib/kubelet/pods/6b11cfdf-ed7a-48ce-97eb-e03cd6be314c/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.470028 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" path="/var/lib/kubelet/pods/6c96a7e1-78c3-449d-9200-735db4ee7086/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.472979 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709c39fb-802f-4690-89f6-41a717e7244c" path="/var/lib/kubelet/pods/709c39fb-802f-4690-89f6-41a717e7244c/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.473742 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d5e0a1-8df4-4db1-aaf8-0d253163a522" path="/var/lib/kubelet/pods/79d5e0a1-8df4-4db1-aaf8-0d253163a522/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.474182 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e3e639-93f4-48d1-8a2f-89e48bcc5f1d" path="/var/lib/kubelet/pods/81e3e639-93f4-48d1-8a2f-89e48bcc5f1d/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.474627 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90821e80-1367-4cf6-8087-fb83507223ec" path="/var/lib/kubelet/pods/90821e80-1367-4cf6-8087-fb83507223ec/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.475583 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92090cd2-6d30-4aec-81a2-f7d41c40b52d" path="/var/lib/kubelet/pods/92090cd2-6d30-4aec-81a2-f7d41c40b52d/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.476007 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b912e45d-72e7-4250-9757-add1efcfb054" path="/var/lib/kubelet/pods/b912e45d-72e7-4250-9757-add1efcfb054/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.476714 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" path="/var/lib/kubelet/pods/c56025ce-3772-435d-bdba-a4d1ba9d6e2f/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.477274 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db5059ce-9214-449d-a8d5-1b6ab7447e65" path="/var/lib/kubelet/pods/db5059ce-9214-449d-a8d5-1b6ab7447e65/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.478193 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e91519e6-bf55-4c08-8274-1d8a59f1ff52" path="/var/lib/kubelet/pods/e91519e6-bf55-4c08-8274-1d8a59f1ff52/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.478694 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" path="/var/lib/kubelet/pods/eb022115-b53a-4ed0-a2a0-b44644dc26a7/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.479636 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" path="/var/lib/kubelet/pods/f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd/volumes" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.689424 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.714770 4842 generic.go:334] "Generic (PLEG): container finished" podID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerID="3913ec835fcef00ab7ba5cfa0bb102b1d808857fbee96be0da99ede67f9672b5" exitCode=0 Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.714833 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerDied","Data":"3913ec835fcef00ab7ba5cfa0bb102b1d808857fbee96be0da99ede67f9672b5"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.717727 4842 generic.go:334] "Generic (PLEG): container finished" podID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerID="384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d" exitCode=0 Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.717921 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.718163 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerDied","Data":"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.718239 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2b2ca532-dbbc-4148-8d2f-fc474685f0bd","Type":"ContainerDied","Data":"63d0cfdfa17eb71cf318213bce11d52e23291a7b7ab17f960100e6c0aabd0b83"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.718266 4842 scope.go:117] "RemoveContainer" containerID="384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.726573 4842 generic.go:334] "Generic (PLEG): container finished" podID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerID="b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869" exitCode=0 Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.726651 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerDied","Data":"b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.736851 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.756727 4842 generic.go:334] "Generic (PLEG): container finished" podID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" 
containerID="4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765" exitCode=0 Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.756793 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd7d86b6c-rcdjq" event={"ID":"7343dd67-a085-4da9-8d79-f25ea1e20ca6","Type":"ContainerDied","Data":"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.757139 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cd7d86b6c-rcdjq" event={"ID":"7343dd67-a085-4da9-8d79-f25ea1e20ca6","Type":"ContainerDied","Data":"0a8707912ffa5b95a33e852a86d3ad76fb5ed5f7a33153be252e8d6c15cbbb8d"} Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.756819 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cd7d86b6c-rcdjq" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.763097 4842 scope.go:117] "RemoveContainer" containerID="6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.782438 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.790276 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cd7d86b6c-rcdjq"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.798739 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.799175 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.800707 4842 scope.go:117] "RemoveContainer" containerID="384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.800793 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.800817 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.801481 4842 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.801571 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d\": container with ID starting with 384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d not found: ID does not exist" containerID="384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.801611 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d"} err="failed to get container status \"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d\": rpc error: code = NotFound desc = could not find container \"384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d\": container with ID starting with 384f2467730e80d894550b124ee5d4d50ba8cf40b6a9c5e38ab8a7bf9548ea2d not found: ID does not exist" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.801635 4842 scope.go:117] "RemoveContainer" containerID="6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.804508 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b\": container with ID starting with 6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b not found: ID does not exist" containerID="6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.804538 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b"} err="failed to get container status \"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b\": rpc error: code = NotFound desc = could not find container \"6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b\": container with ID starting with 6c31731dd55c0106a8a51f84c9feb372cb01a4a0f209022835cbd8f0c40ce80b not found: ID does not exist" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.804558 4842 scope.go:117] "RemoveContainer" containerID="4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.804578 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.807939 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.807998 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.819809 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.819956 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820019 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820067 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820138 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ttm4\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820159 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820186 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820234 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820269 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" 
(UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820297 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820320 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd\") pod \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\" (UID: \"2b2ca532-dbbc-4148-8d2f-fc474685f0bd\") " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.820617 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.821052 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.821069 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.829329 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.829338 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.830241 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.832727 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4" (OuterVolumeSpecName: "kube-api-access-9ttm4") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "kube-api-access-9ttm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.833345 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info" (OuterVolumeSpecName: "pod-info") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.834731 4842 scope.go:117] "RemoveContainer" containerID="4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765" Feb 02 07:09:23 crc kubenswrapper[4842]: E0202 07:09:23.838402 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765\": container with ID starting with 4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765 not found: ID does not exist" containerID="4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.838438 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765"} err="failed to get container status \"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765\": rpc error: code = NotFound desc = could not find container \"4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765\": container with ID starting with 4e6d71c03ef27703f095692cfb9e2c5680467263aa934bc2fe4e56b094edd765 not found: ID does not exist" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.850040 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data" (OuterVolumeSpecName: "config-data") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.864330 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf" (OuterVolumeSpecName: "server-conf") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.894710 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6684555597-gjtgz" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9696/\": dial tcp 10.217.0.168:9696: connect: connection refused" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.917878 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2b2ca532-dbbc-4148-8d2f-fc474685f0bd" (UID: "2b2ca532-dbbc-4148-8d2f-fc474685f0bd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921766 4842 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921792 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921803 4842 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921814 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ttm4\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-kube-api-access-9ttm4\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921827 4842 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-server-conf\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921853 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921863 4842 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-pod-info\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921876 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921886 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921896 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 02 
07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.921905 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b2ca532-dbbc-4148-8d2f-fc474685f0bd-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:23 crc kubenswrapper[4842]: I0202 07:09:23.943698 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.003931 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.024706 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: E0202 07:09:24.024805 4842 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Feb 02 07:09:24 crc kubenswrapper[4842]: E0202 07:09:24.025290 4842 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data podName:441d47f7-e5dd-456f-b6fa-10a642be6742 nodeName:}" failed. No retries permitted until 2026-02-02 07:09:32.025274217 +0000 UTC m=+1397.402542129 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data") pod "rabbitmq-cell1-server-0" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742") : configmap "rabbitmq-cell1-config-data" not found
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.056518 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.067091 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125343 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125422 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125447 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125484 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125505 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125535 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n8dl\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125564 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125594 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125628 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125671 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.125700 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins\") pod \"441d47f7-e5dd-456f-b6fa-10a642be6742\" (UID: \"441d47f7-e5dd-456f-b6fa-10a642be6742\") "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.126199 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.126454 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.127776 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.129402 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.129899 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.130425 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info" (OuterVolumeSpecName: "pod-info") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.130697 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.144034 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data" (OuterVolumeSpecName: "config-data") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.146670 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl" (OuterVolumeSpecName: "kube-api-access-9n8dl") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "kube-api-access-9n8dl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.161166 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf" (OuterVolumeSpecName: "server-conf") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.201482 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "441d47f7-e5dd-456f-b6fa-10a642be6742" (UID: "441d47f7-e5dd-456f-b6fa-10a642be6742"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227821 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227848 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227860 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227870 4842 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/441d47f7-e5dd-456f-b6fa-10a642be6742-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227879 4842 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/441d47f7-e5dd-456f-b6fa-10a642be6742-pod-info\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227886 4842 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-server-conf\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227894 4842 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227903 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n8dl\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-kube-api-access-9n8dl\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227933 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227942 4842 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/441d47f7-e5dd-456f-b6fa-10a642be6742-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.227950 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/441d47f7-e5dd-456f-b6fa-10a642be6742-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.250890 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.259333 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-659598d599-lpzh5" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.170:8080/healthcheck\": dial tcp 10.217.0.170:8080: i/o timeout"
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.259584 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-659598d599-lpzh5" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.170:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.329460 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.782279 4842 generic.go:334] "Generic (PLEG): container finished" podID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4" exitCode=0
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.783405 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cbda1f81-b862-4ee7-84ce-590c353e4d5b","Type":"ContainerDied","Data":"75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4"}
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.791775 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerStarted","Data":"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37"}
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.795541 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"441d47f7-e5dd-456f-b6fa-10a642be6742","Type":"ContainerDied","Data":"f125ead6f6ca269886544c12b159c6f5309a094d04f426e2da08b9aef5bc513c"}
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.795595 4842 scope.go:117] "RemoveContainer" containerID="3913ec835fcef00ab7ba5cfa0bb102b1d808857fbee96be0da99ede67f9672b5"
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.795760 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.853422 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.855281 4842 scope.go:117] "RemoveContainer" containerID="15488c5f14bed733c354b136f5f9b0303d01f42120de21fa2a655d19a2d681ef"
Feb 02 07:09:24 crc kubenswrapper[4842]: I0202 07:09:24.859912 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.179688 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.244026 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data\") pod \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") "
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.244152 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf5pj\" (UniqueName: \"kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj\") pod \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") "
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.244253 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle\") pod \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\" (UID: \"cbda1f81-b862-4ee7-84ce-590c353e4d5b\") "
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.249012 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj" (OuterVolumeSpecName: "kube-api-access-zf5pj") pod "cbda1f81-b862-4ee7-84ce-590c353e4d5b" (UID: "cbda1f81-b862-4ee7-84ce-590c353e4d5b"). InnerVolumeSpecName "kube-api-access-zf5pj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.264399 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbda1f81-b862-4ee7-84ce-590c353e4d5b" (UID: "cbda1f81-b862-4ee7-84ce-590c353e4d5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.266173 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data" (OuterVolumeSpecName: "config-data") pod "cbda1f81-b862-4ee7-84ce-590c353e4d5b" (UID: "cbda1f81-b862-4ee7-84ce-590c353e4d5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.345861 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf5pj\" (UniqueName: \"kubernetes.io/projected/cbda1f81-b862-4ee7-84ce-590c353e4d5b-kube-api-access-zf5pj\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.345899 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.345909 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbda1f81-b862-4ee7-84ce-590c353e4d5b-config-data\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.441478 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" path="/var/lib/kubelet/pods/2b2ca532-dbbc-4148-8d2f-fc474685f0bd/volumes"
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.442285 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" path="/var/lib/kubelet/pods/441d47f7-e5dd-456f-b6fa-10a642be6742/volumes"
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.443266 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" path="/var/lib/kubelet/pods/7343dd67-a085-4da9-8d79-f25ea1e20ca6/volumes"
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.812982 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"cbda1f81-b862-4ee7-84ce-590c353e4d5b","Type":"ContainerDied","Data":"85e914a150668613743c13aeff477024d4b0461bd9157d8138fdfcfd7144ee67"}
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.813040 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.813049 4842 scope.go:117] "RemoveContainer" containerID="75df0dcbbbe53a8b55947d6010ee6f966cc34b098ea07e3b90fcd36b98f46fc4"
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.816172 4842 generic.go:334] "Generic (PLEG): container finished" podID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerID="f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37" exitCode=0
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.816319 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerDied","Data":"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37"}
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.876266 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 02 07:09:25 crc kubenswrapper[4842]: I0202 07:09:25.883112 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 02 07:09:26 crc kubenswrapper[4842]: I0202 07:09:26.620610 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cc5c967fd-w6ljx" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 07:09:26 crc kubenswrapper[4842]: I0202 07:09:26.620642 4842 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cc5c967fd-w6ljx" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 02 07:09:26 crc kubenswrapper[4842]: I0202 07:09:26.828615 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerStarted","Data":"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f"}
Feb 02 07:09:26 crc kubenswrapper[4842]: I0202 07:09:26.851714 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zllm7" podStartSLOduration=6.355325003 podStartE2EDuration="8.85168316s" podCreationTimestamp="2026-02-02 07:09:18 +0000 UTC" firstStartedPulling="2026-02-02 07:09:23.736595709 +0000 UTC m=+1389.113863621" lastFinishedPulling="2026-02-02 07:09:26.232953856 +0000 UTC m=+1391.610221778" observedRunningTime="2026-02-02 07:09:26.846968159 +0000 UTC m=+1392.224236111" watchObservedRunningTime="2026-02-02 07:09:26.85168316 +0000 UTC m=+1392.228951112"
Feb 02 07:09:27 crc kubenswrapper[4842]: I0202 07:09:27.442042 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" path="/var/lib/kubelet/pods/cbda1f81-b862-4ee7-84ce-590c353e4d5b/volumes"
Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.796935 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.797337 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.797652 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.797706 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server"
Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.801303 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.803144 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.805328 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:28 crc kubenswrapper[4842]: E0202 07:09:28.805363 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd"
Feb 02 07:09:29 crc kubenswrapper[4842]: I0202 07:09:29.750264 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zllm7"
Feb 02 07:09:29 crc kubenswrapper[4842]: I0202 07:09:29.750321 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zllm7"
Feb 02 07:09:30 crc kubenswrapper[4842]: I0202 07:09:30.816373 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zllm7" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="registry-server" probeResult="failure" output=<
Feb 02 07:09:30 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s
Feb 02 07:09:30 crc kubenswrapper[4842]: >
Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.798586 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.799197 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.799416 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.799541 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.799583 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server"
Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.801347 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.802437 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:33 crc kubenswrapper[4842]: E0202 07:09:33.802476 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd"
Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.800404 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.800729 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.802560 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.802920 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.802969 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server"
Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.804316 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.808336 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:38 crc kubenswrapper[4842]: E0202 07:09:38.808394 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd"
Feb 02 07:09:38 crc kubenswrapper[4842]: I0202 07:09:38.953993 4842 generic.go:334] "Generic (PLEG): container finished" podID="953bf671-ca79-4208-9bab-672dc079dd82" containerID="679d0126323f1cafc695474001597b9d37c1a23ba5158a00e7f240fffa003eca" exitCode=0
Feb 02 07:09:38 crc kubenswrapper[4842]: I0202 07:09:38.954050 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerDied","Data":"679d0126323f1cafc695474001597b9d37c1a23ba5158a00e7f240fffa003eca"}
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.154896 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6684555597-gjtgz"
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274013 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") "
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274087 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj647\" (UniqueName: \"kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") "
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274120 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") "
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274180 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") "
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274287 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") "
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274323 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") "
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.274350 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs\") pod \"953bf671-ca79-4208-9bab-672dc079dd82\" (UID: \"953bf671-ca79-4208-9bab-672dc079dd82\") "
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.281373 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.281766 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647" (OuterVolumeSpecName: "kube-api-access-wj647") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "kube-api-access-wj647". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.314850 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.337798 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.339880 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.344344 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config" (OuterVolumeSpecName: "config") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.370921 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "953bf671-ca79-4208-9bab-672dc079dd82" (UID: "953bf671-ca79-4208-9bab-672dc079dd82"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.375951 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.375984 4842 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.375994 4842 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.376004 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj647\" (UniqueName: \"kubernetes.io/projected/953bf671-ca79-4208-9bab-672dc079dd82-kube-api-access-wj647\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.376015 4842 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.376024 4842 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.376032 4842 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/953bf671-ca79-4208-9bab-672dc079dd82-config\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.813321 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zllm7"
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.964846 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6684555597-gjtgz" event={"ID":"953bf671-ca79-4208-9bab-672dc079dd82","Type":"ContainerDied","Data":"642e7ab1c818fa3e0857124b890ed7f6355271588ac21bdb99c64d978b7374b0"}
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.964896 4842 scope.go:117] "RemoveContainer" containerID="69048ee01a49fa4ed888b0c135134e06af01f907b56780330edbc72e09136e83"
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.964928 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6684555597-gjtgz"
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.966733 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zllm7"
Feb 02 07:09:39 crc kubenswrapper[4842]: I0202 07:09:39.996177 4842 scope.go:117] "RemoveContainer" containerID="679d0126323f1cafc695474001597b9d37c1a23ba5158a00e7f240fffa003eca"
Feb 02 07:09:40 crc kubenswrapper[4842]: I0202 07:09:40.020609 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6684555597-gjtgz"]
Feb 02 07:09:40 crc kubenswrapper[4842]: I0202 07:09:40.030534 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6684555597-gjtgz"]
Feb 02 07:09:40 crc kubenswrapper[4842]: I0202 07:09:40.058104 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"]
Feb 02 07:09:40 crc kubenswrapper[4842]: I0202 07:09:40.978326 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zllm7" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="registry-server" containerID="cri-o://1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f" gracePeriod=2
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.451839 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953bf671-ca79-4208-9bab-672dc079dd82" path="/var/lib/kubelet/pods/953bf671-ca79-4208-9bab-672dc079dd82/volumes"
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.559546 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zllm7"
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.741486 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities\") pod \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") "
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.741848 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f45s8\" (UniqueName: \"kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8\") pod \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") "
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.741969 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content\") pod \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\" (UID: \"02f0d774-dbe6-45d5-9ffa-64383c8be0d7\") "
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.742488 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities" (OuterVolumeSpecName: "utilities") pod "02f0d774-dbe6-45d5-9ffa-64383c8be0d7" (UID: "02f0d774-dbe6-45d5-9ffa-64383c8be0d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.747833 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8" (OuterVolumeSpecName: "kube-api-access-f45s8") pod "02f0d774-dbe6-45d5-9ffa-64383c8be0d7" (UID: "02f0d774-dbe6-45d5-9ffa-64383c8be0d7"). InnerVolumeSpecName "kube-api-access-f45s8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.843899 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.843947 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f45s8\" (UniqueName: \"kubernetes.io/projected/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-kube-api-access-f45s8\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.924746 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02f0d774-dbe6-45d5-9ffa-64383c8be0d7" (UID: "02f0d774-dbe6-45d5-9ffa-64383c8be0d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.945788 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f0d774-dbe6-45d5-9ffa-64383c8be0d7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.994070 4842 generic.go:334] "Generic (PLEG): container finished" podID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerID="1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f" exitCode=0
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.994148 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerDied","Data":"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f"}
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.994200 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zllm7" event={"ID":"02f0d774-dbe6-45d5-9ffa-64383c8be0d7","Type":"ContainerDied","Data":"2cbf9ae96d96235341d31a68b4251a05222974fd5545b2aa050455da09a3394e"}
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.994269 4842 scope.go:117] "RemoveContainer" containerID="1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f"
Feb 02 07:09:41 crc kubenswrapper[4842]: I0202 07:09:41.994360 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zllm7"
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.049730 4842 scope.go:117] "RemoveContainer" containerID="f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37"
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.050433 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"]
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.058973 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zllm7"]
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.079120 4842 scope.go:117] "RemoveContainer" containerID="b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869"
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.104478 4842 scope.go:117] "RemoveContainer" containerID="1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f"
Feb 02 07:09:42 crc kubenswrapper[4842]: E0202 07:09:42.105117 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f\": container with ID starting with 1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f not found: ID does not exist" containerID="1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f"
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.105154 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f"} err="failed to get container status \"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f\": rpc error: code = NotFound desc = could not find container \"1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f\": container with ID starting with 1fc31936ea8e9f9b875ebd7857ad04e6102b7866b0c1de09c58a29f7919b073f not found: ID does not exist"
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.105184 4842 scope.go:117] "RemoveContainer" containerID="f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37"
Feb 02 07:09:42 crc kubenswrapper[4842]: E0202 07:09:42.105801 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37\": container with ID starting with f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37 not found: ID does not exist" containerID="f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37"
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.105877 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37"} err="failed to get container status \"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37\": rpc error: code = NotFound desc = could not find container \"f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37\": container with ID starting with f2cb985a30fbcf047b72d30936225b42c521d9d6aa877867ab68fc50e1baca37 not found: ID does not exist"
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.105913 4842 scope.go:117] "RemoveContainer" containerID="b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869"
Feb 02 07:09:42 crc kubenswrapper[4842]: E0202 07:09:42.106414 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869\": container with ID starting with b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869 not found: ID does not exist" containerID="b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869"
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.106451 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869"} err="failed to get container status \"b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869\": rpc error: code = NotFound desc = could not find container \"b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869\": container with ID starting with b6fbbeefaf6c662fb9dc489fefb6fc893e73cc0665f964e826ce195432515869 not found: ID does not exist"
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.145764 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:09:42 crc kubenswrapper[4842]: I0202 07:09:42.145826 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:09:43 crc kubenswrapper[4842]: I0202 07:09:43.449538 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" path="/var/lib/kubelet/pods/02f0d774-dbe6-45d5-9ffa-64383c8be0d7/volumes"
Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.797528 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.798247 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.798711 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.798789 4842 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server"
Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.799581 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.801457 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.803453 4842 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Feb 02 07:09:43 crc kubenswrapper[4842]: E0202 07:09:43.803526 4842 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-vctt8" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd"
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.057356 4842 generic.go:334] "Generic (PLEG): container finished" podID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerID="a0ba4c6bbf6b05d401f52ab663d9f47cbde0cebb5dfcb8997ff120cffdd05060" exitCode=137
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.057454 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"a0ba4c6bbf6b05d401f52ab663d9f47cbde0cebb5dfcb8997ff120cffdd05060"}
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.061321 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vctt8_ce6d1a00-c27b-418e-afa9-01c8c7802127/ovs-vswitchd/0.log"
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.062829 4842 generic.go:334] "Generic (PLEG): container finished" podID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" exitCode=137
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.062871 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerDied","Data":"3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e"}
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.213083 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332192 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") "
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332300 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") "
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332328 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") "
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332391 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") "
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332424 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9t87\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") "
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332441 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") pod \"928a8c7e-d835-4795-8197-1861e4fd8f83\" (UID: \"928a8c7e-d835-4795-8197-1861e4fd8f83\") "
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.332963 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock" (OuterVolumeSpecName: "lock") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.333137 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache" (OuterVolumeSpecName: "cache") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.337429 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "swift") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.337992 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.339043 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87" (OuterVolumeSpecName: "kube-api-access-t9t87") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "kube-api-access-t9t87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.342574 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vctt8_ce6d1a00-c27b-418e-afa9-01c8c7802127/ovs-vswitchd/0.log"
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.343600 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vctt8"
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433113 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") "
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433155 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") "
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433240 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log" (OuterVolumeSpecName: "var-log") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433284 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lfhd\" (UniqueName: \"kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") "
Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433290 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run" (OuterVolumeSpecName: "var-run") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "var-run".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433330 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433407 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433440 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts\") pod \"ce6d1a00-c27b-418e-afa9-01c8c7802127\" (UID: \"ce6d1a00-c27b-418e-afa9-01c8c7802127\") " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433481 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib" (OuterVolumeSpecName: "var-lib") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433509 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433736 4842 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-lib\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433763 4842 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433776 4842 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-lock\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433788 4842 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-etc-ovs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433799 4842 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-log\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433811 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9t87\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-kube-api-access-t9t87\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433823 4842 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce6d1a00-c27b-418e-afa9-01c8c7802127-var-run\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433834 4842 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/928a8c7e-d835-4795-8197-1861e4fd8f83-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.433845 4842 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/928a8c7e-d835-4795-8197-1861e4fd8f83-cache\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.434576 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts" (OuterVolumeSpecName: "scripts") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.436119 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd" (OuterVolumeSpecName: "kube-api-access-6lfhd") pod "ce6d1a00-c27b-418e-afa9-01c8c7802127" (UID: "ce6d1a00-c27b-418e-afa9-01c8c7802127"). InnerVolumeSpecName "kube-api-access-6lfhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.452527 4842 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.535147 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lfhd\" (UniqueName: \"kubernetes.io/projected/ce6d1a00-c27b-418e-afa9-01c8c7802127-kube-api-access-6lfhd\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.535190 4842 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.535211 4842 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce6d1a00-c27b-418e-afa9-01c8c7802127-scripts\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.631584 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "928a8c7e-d835-4795-8197-1861e4fd8f83" (UID: "928a8c7e-d835-4795-8197-1861e4fd8f83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:47 crc kubenswrapper[4842]: I0202 07:09:47.636344 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/928a8c7e-d835-4795-8197-1861e4fd8f83-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.073600 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-vctt8_ce6d1a00-c27b-418e-afa9-01c8c7802127/ovs-vswitchd/0.log" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.075156 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-vctt8" event={"ID":"ce6d1a00-c27b-418e-afa9-01c8c7802127","Type":"ContainerDied","Data":"20790a3e9ff5cd63d4fa516d28e246cafad534d4d8104c6a1f16eb5a3c586904"} Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.075204 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-vctt8" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.075265 4842 scope.go:117] "RemoveContainer" containerID="3d012027dc77ec74c67db1701cffcf6155ff207cb1c71ca4a1718a0c29fa0d3e" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.091589 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"928a8c7e-d835-4795-8197-1861e4fd8f83","Type":"ContainerDied","Data":"ab889a1e60a176a5157cbf2492af02320a93e4b8f19cc77b84445a221a0d1b90"} Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.091711 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.096652 4842 scope.go:117] "RemoveContainer" containerID="a70ae241fd61d79ed259a10e194d4b360436ccd9fe075ef0a7771cbd8334c07c" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.100149 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-vctt8"] Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.116409 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-vctt8"] Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.138244 4842 scope.go:117] "RemoveContainer" containerID="0e2b21c37cc6f772bef7c4e80d3e6f156ca0d9772f52dfdc03a69fbc57f8dd8b" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.148865 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.154405 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.163576 4842 scope.go:117] "RemoveContainer" containerID="a0ba4c6bbf6b05d401f52ab663d9f47cbde0cebb5dfcb8997ff120cffdd05060" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.188651 4842 scope.go:117] "RemoveContainer" containerID="419e27de3686d1a75400d18f391cbe54519868631357cce324a86c057a1dbbfe" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.204860 4842 scope.go:117] "RemoveContainer" containerID="c3ceba27f85cf9e18b4c96e9c35e3e830a3840e245ff37876679745418c599df" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.219498 4842 scope.go:117] "RemoveContainer" containerID="11c87109b1d73f0312d44a7a194b500b7f7e551073a65468bc291891955fd1d1" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.234767 4842 scope.go:117] "RemoveContainer" containerID="3accf74226bf0263e16fdcc906f97a58d41768cb604252689a8c7a9fac50f04f" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.252936 4842 scope.go:117] "RemoveContainer" containerID="a6f0be0e71192334da01f394f7e0075f3ff472a60d737f40449f0c7c56b45801" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.276304 4842 scope.go:117] "RemoveContainer" containerID="5fe6ac9847ee5629c3a3a2ccb929b05946534e86d95fae65cd97cbab654c7391" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.292967 4842 scope.go:117] "RemoveContainer" containerID="94a480917554fbdc9c94fdc240db04a25556fac19911eb5945a6838a7169e5f3" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.319100 4842 scope.go:117] "RemoveContainer" containerID="98d05e29848a090df093dcb34910845ebd22086e918c4b510210550b0fcd98f9" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.337502 4842 scope.go:117] "RemoveContainer" containerID="84a64916ad5a870dd2730290e371bd4ee7a327af7bfa716ae7b3457657e3b792" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.357098 4842 scope.go:117] "RemoveContainer" containerID="78ea2470e0bb66602235ee6f953b1cb50c60bbf2dda3d60aa9ded3436730161c" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.378722 4842 scope.go:117] "RemoveContainer" containerID="1864c37f5464bef32be4591740d73c6be777716e778338b57e2c23f30b098973" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.398830 4842 scope.go:117] "RemoveContainer" containerID="81e3b07657ef3f1d8e0c81f783b14b3167b42779f998c664f2c184857a6ffc8b" Feb 02 07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.453130 4842 scope.go:117] "RemoveContainer" containerID="0579b6675bbca573212a34273ea354bc485d0dead5d30e277230eaf0ce0b9594" Feb 02 
07:09:48 crc kubenswrapper[4842]: I0202 07:09:48.480816 4842 scope.go:117] "RemoveContainer" containerID="496f7c8f3a8e1190f069f9d123dad4f03c5ddc2c339a3a530d938ce75113f766" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.039974 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.107261 4842 generic.go:334] "Generic (PLEG): container finished" podID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerID="dac9b206e4e1335054c8c15fe13fa2bcf140fe9dec688f671a0584f1e29286b6" exitCode=137 Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.107386 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerDied","Data":"dac9b206e4e1335054c8c15fe13fa2bcf140fe9dec688f671a0584f1e29286b6"} Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.109031 4842 generic.go:334] "Generic (PLEG): container finished" podID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerID="b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081" exitCode=137 Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.109077 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerDied","Data":"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081"} Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.109097 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" event={"ID":"748756c2-ee60-42ce-835e-bfaa7007d7ac","Type":"ContainerDied","Data":"09ed8d05d994b4f10b7eef605b2f606beee05a7896873233e85ba84f7bd5475e"} Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.109116 4842 scope.go:117] "RemoveContainer" containerID="b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.109235 4842 util.go:48] "No ready sandbox for pod can be found. 
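The exitCode=137 on the ContainerDied events above is the conventional 128+signal encoding: 137 = 128 + 9, i.e. the container was killed with SIGKILL after failing to exit within its termination grace period. A small decoding sketch:

package main

import "fmt"

// signalFromExitCode applies the 128+N convention for containers killed
// by a signal: 137 -> SIGKILL (9), 143 -> SIGTERM (15).
func signalFromExitCode(code int) (sig int, killedBySignal bool) {
	if code > 128 && code < 160 {
		return code - 128, true
	}
	return 0, false
}

func main() {
	sig, ok := signalFromExitCode(137)
	fmt.Println(sig, ok) // 9 true -> SIGKILL
}

By contrast, the machine-config-daemon entry later in this log shows exitCode=0, a clean exit after the kill request.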
Need to start a new one" pod="openstack/barbican-keystone-listener-687b99dfd8-skrq6" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.135863 4842 scope.go:117] "RemoveContainer" containerID="c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.150748 4842 scope.go:117] "RemoveContainer" containerID="b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081" Feb 02 07:09:49 crc kubenswrapper[4842]: E0202 07:09:49.151441 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081\": container with ID starting with b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081 not found: ID does not exist" containerID="b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.151472 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081"} err="failed to get container status \"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081\": rpc error: code = NotFound desc = could not find container \"b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081\": container with ID starting with b52b688787922560d30dfe4b0b956a05a57d07b8c6d9016ccf7d37fd8f711081 not found: ID does not exist" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.151492 4842 scope.go:117] "RemoveContainer" containerID="c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398" Feb 02 07:09:49 crc kubenswrapper[4842]: E0202 07:09:49.151820 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398\": container with ID starting with c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398 not found: ID does not exist" containerID="c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.151864 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398"} err="failed to get container status \"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398\": rpc error: code = NotFound desc = could not find container \"c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398\": container with ID starting with c802fa3028f8b2c2c2cefe528fbbb11245e3ea35edbed19c7f9407c4edba1398 not found: ID does not exist" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.157417 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle\") pod \"748756c2-ee60-42ce-835e-bfaa7007d7ac\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.157536 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data\") pod \"748756c2-ee60-42ce-835e-bfaa7007d7ac\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.157586 4842 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs\") pod \"748756c2-ee60-42ce-835e-bfaa7007d7ac\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.157617 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb\") pod \"748756c2-ee60-42ce-835e-bfaa7007d7ac\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.157640 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom\") pod \"748756c2-ee60-42ce-835e-bfaa7007d7ac\" (UID: \"748756c2-ee60-42ce-835e-bfaa7007d7ac\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.158372 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs" (OuterVolumeSpecName: "logs") pod "748756c2-ee60-42ce-835e-bfaa7007d7ac" (UID: "748756c2-ee60-42ce-835e-bfaa7007d7ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.162400 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb" (OuterVolumeSpecName: "kube-api-access-kkhbb") pod "748756c2-ee60-42ce-835e-bfaa7007d7ac" (UID: "748756c2-ee60-42ce-835e-bfaa7007d7ac"). InnerVolumeSpecName "kube-api-access-kkhbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.176248 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "748756c2-ee60-42ce-835e-bfaa7007d7ac" (UID: "748756c2-ee60-42ce-835e-bfaa7007d7ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.187955 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "748756c2-ee60-42ce-835e-bfaa7007d7ac" (UID: "748756c2-ee60-42ce-835e-bfaa7007d7ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.209102 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data" (OuterVolumeSpecName: "config-data") pod "748756c2-ee60-42ce-835e-bfaa7007d7ac" (UID: "748756c2-ee60-42ce-835e-bfaa7007d7ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.229610 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.265206 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.265248 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/748756c2-ee60-42ce-835e-bfaa7007d7ac-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.265258 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkhbb\" (UniqueName: \"kubernetes.io/projected/748756c2-ee60-42ce-835e-bfaa7007d7ac-kube-api-access-kkhbb\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.265268 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.265276 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/748756c2-ee60-42ce-835e-bfaa7007d7ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.365737 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle\") pod \"f3d6691d-0283-4dd7-966d-ceba8bde7895\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.365820 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdbkt\" (UniqueName: \"kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt\") pod \"f3d6691d-0283-4dd7-966d-ceba8bde7895\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.365886 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom\") pod \"f3d6691d-0283-4dd7-966d-ceba8bde7895\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.365981 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs\") pod \"f3d6691d-0283-4dd7-966d-ceba8bde7895\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.366060 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data\") pod \"f3d6691d-0283-4dd7-966d-ceba8bde7895\" (UID: \"f3d6691d-0283-4dd7-966d-ceba8bde7895\") " Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.366673 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs" (OuterVolumeSpecName: "logs") pod "f3d6691d-0283-4dd7-966d-ceba8bde7895" (UID: "f3d6691d-0283-4dd7-966d-ceba8bde7895"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.369124 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt" (OuterVolumeSpecName: "kube-api-access-xdbkt") pod "f3d6691d-0283-4dd7-966d-ceba8bde7895" (UID: "f3d6691d-0283-4dd7-966d-ceba8bde7895"). InnerVolumeSpecName "kube-api-access-xdbkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.370062 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3d6691d-0283-4dd7-966d-ceba8bde7895" (UID: "f3d6691d-0283-4dd7-966d-ceba8bde7895"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.394437 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3d6691d-0283-4dd7-966d-ceba8bde7895" (UID: "f3d6691d-0283-4dd7-966d-ceba8bde7895"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.415749 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data" (OuterVolumeSpecName: "config-data") pod "f3d6691d-0283-4dd7-966d-ceba8bde7895" (UID: "f3d6691d-0283-4dd7-966d-ceba8bde7895"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.461211 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" path="/var/lib/kubelet/pods/928a8c7e-d835-4795-8197-1861e4fd8f83/volumes" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.465067 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" path="/var/lib/kubelet/pods/ce6d1a00-c27b-418e-afa9-01c8c7802127/volumes" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.466192 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.466312 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-687b99dfd8-skrq6"] Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.467079 4842 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.467111 4842 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.467127 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdbkt\" (UniqueName: \"kubernetes.io/projected/f3d6691d-0283-4dd7-966d-ceba8bde7895-kube-api-access-xdbkt\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.467143 4842 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d6691d-0283-4dd7-966d-ceba8bde7895-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:49 crc kubenswrapper[4842]: I0202 07:09:49.467158 4842 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6691d-0283-4dd7-966d-ceba8bde7895-logs\") on node \"crc\" DevicePath \"\"" Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.129016 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" event={"ID":"f3d6691d-0283-4dd7-966d-ceba8bde7895","Type":"ContainerDied","Data":"d69c45eb45e674be84418f12982b88cbb7cb13f89d733e29e26157326878116c"} Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.129034 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cf958d9d9-vvzkc" Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.129585 4842 scope.go:117] "RemoveContainer" containerID="dac9b206e4e1335054c8c15fe13fa2bcf140fe9dec688f671a0584f1e29286b6" Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.159624 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.169045 4842 scope.go:117] "RemoveContainer" containerID="04882b818d128bc118fdd65d9db4d076517b460bcb504e4f555e0244313167cc" Feb 02 07:09:50 crc kubenswrapper[4842]: I0202 07:09:50.169347 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-5cf958d9d9-vvzkc"] Feb 02 07:09:51 crc kubenswrapper[4842]: I0202 07:09:51.452248 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" path="/var/lib/kubelet/pods/748756c2-ee60-42ce-835e-bfaa7007d7ac/volumes" Feb 02 07:09:51 crc kubenswrapper[4842]: I0202 07:09:51.453771 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" path="/var/lib/kubelet/pods/f3d6691d-0283-4dd7-966d-ceba8bde7895/volumes" Feb 02 07:10:12 crc kubenswrapper[4842]: I0202 07:10:12.146475 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:10:12 crc kubenswrapper[4842]: I0202 07:10:12.147099 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.146827 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.149167 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.149432 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.150673 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.151020 4842 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" gracePeriod=600 Feb 02 07:10:42 crc kubenswrapper[4842]: E0202 07:10:42.283497 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.731497 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" exitCode=0 Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.731587 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"} Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.732189 4842 scope.go:117] "RemoveContainer" containerID="edc46ebafd92ce96bdf7451703c0e2c7fef67799fb2195e0085383b856862c49" Feb 02 07:10:42 crc kubenswrapper[4842]: I0202 07:10:42.733559 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:10:42 crc kubenswrapper[4842]: E0202 07:10:42.735448 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:10:57 crc kubenswrapper[4842]: I0202 07:10:57.433028 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:10:57 crc kubenswrapper[4842]: E0202 07:10:57.434157 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:11:08 crc kubenswrapper[4842]: I0202 07:11:08.433910 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:11:08 crc kubenswrapper[4842]: E0202 07:11:08.435028 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:11:18 crc 
kubenswrapper[4842]: I0202 07:11:18.484054 4842 scope.go:117] "RemoveContainer" containerID="59526756b474c2762ebc0f7a6578c91c40cc272db00fa72f3384382706ed53e2" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.532197 4842 scope.go:117] "RemoveContainer" containerID="185ab6e958e5fc2a5da9e833e3789438b8d16f440f7c53e0467e8ff307a5f7c8" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.579858 4842 scope.go:117] "RemoveContainer" containerID="f28dfbf8c174cb46df97e4d7d6b844e785a2d8671506e1ebb71b67017e08a6b8" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.641141 4842 scope.go:117] "RemoveContainer" containerID="1f6dfdf20fb08a168081a064432d989dfc5b7013b8511778f8a6195c000accc0" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.667786 4842 scope.go:117] "RemoveContainer" containerID="326e1290c30749283ca2bf9608aa395736ad83c0971c17e5e2948a81ffff16c0" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.697793 4842 scope.go:117] "RemoveContainer" containerID="5a4746c338d6ea60edc25a0f516095639bc028a5f96d859500d9f30d568afd7f" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.731914 4842 scope.go:117] "RemoveContainer" containerID="fd930d739c77e2c60500ea7cab9f16a6ba8a914130efb858b41ff112a5549c6c" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.757006 4842 scope.go:117] "RemoveContainer" containerID="d406c8dd7aa9d060cb8c2e933af0916fc03ef6a4df86a58d035643deda1d435e" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.783612 4842 scope.go:117] "RemoveContainer" containerID="2b38ab8a50c4bfdef3036052e4dbdb50598c007951f872fa5af56a866e47db58" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.809892 4842 scope.go:117] "RemoveContainer" containerID="d8fe329dd4b6d5e2f6afa45efa10d42b7ad946aa8ec1ea8a45b86570356f4bd0" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.839491 4842 scope.go:117] "RemoveContainer" containerID="17bb3eec7905f7b5df5e9c3137f1a5db8fc820e99f038ef4113064b8ca0bb24d" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.910509 4842 scope.go:117] "RemoveContainer" containerID="baa67ddc95fed558f7c865e018c407b7a90c8fd196753967451af639f1b0851e" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.934565 4842 scope.go:117] "RemoveContainer" containerID="95018804c3eeb98d3bc4dd01533eb47f23f9335fb411951096ec1c046e6c00c4" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.962037 4842 scope.go:117] "RemoveContainer" containerID="a5e957fb74580066bf78b8278f65ee1b3e13330434bca538903d73afe512a090" Feb 02 07:11:18 crc kubenswrapper[4842]: I0202 07:11:18.987506 4842 scope.go:117] "RemoveContainer" containerID="be09858b0b26720a1b1eb72e60d3de0b3dbd4ce4a7e6fc548a4d5f3d171165c8" Feb 02 07:11:19 crc kubenswrapper[4842]: I0202 07:11:19.023210 4842 scope.go:117] "RemoveContainer" containerID="1fdc53d1e29c1c53121cfb56667f86dc9ccc9f8da8c68e110eaaab428c59853f" Feb 02 07:11:19 crc kubenswrapper[4842]: I0202 07:11:19.053726 4842 scope.go:117] "RemoveContainer" containerID="8450cdf340185e60d5f4db9ea47d0c0bf9eae39c09e5f2b6a32cf93eac9395f1" Feb 02 07:11:19 crc kubenswrapper[4842]: I0202 07:11:19.082027 4842 scope.go:117] "RemoveContainer" containerID="af9aab2a24cfc4f124984122e483edf359b136da9788f63d0af01da2b636aa44" Feb 02 07:11:23 crc kubenswrapper[4842]: I0202 07:11:23.434042 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:11:23 crc kubenswrapper[4842]: E0202 07:11:23.434917 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:11:34 crc kubenswrapper[4842]: I0202 07:11:34.433825 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:11:34 crc kubenswrapper[4842]: E0202 07:11:34.435368 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:11:45 crc kubenswrapper[4842]: I0202 07:11:45.440459 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:11:45 crc kubenswrapper[4842]: E0202 07:11:45.441204 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:11:56 crc kubenswrapper[4842]: I0202 07:11:56.433497 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:11:56 crc kubenswrapper[4842]: E0202 07:11:56.434432 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:12:09 crc kubenswrapper[4842]: I0202 07:12:09.434593 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:12:09 crc kubenswrapper[4842]: E0202 07:12:09.435779 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.497932 4842 scope.go:117] "RemoveContainer" containerID="39eb208f6af2deea706cedebd930cca14ea7a25cb9ca73a57ad9dc64e6023a18" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.560588 4842 scope.go:117] "RemoveContainer" containerID="e6c087a85acb8c56b9934f5572a1bcc68f491cf79f0f8b755c20d672d211503e" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.609352 4842 scope.go:117] "RemoveContainer" 
containerID="9a34bab1d66516a5177aafc62bed955fa80608af2d16da47596a9168353c819f" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.653135 4842 scope.go:117] "RemoveContainer" containerID="7195db1dd98fa99bf79467abe2ecc6133db9df280df7df78ae67b06d2ce5fe42" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.713045 4842 scope.go:117] "RemoveContainer" containerID="d6ab707ecf1e978e711e1ac029ea3186750e3b41e200559f065ad3d1d57c4081" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.751481 4842 scope.go:117] "RemoveContainer" containerID="d4afe8e323946b2a091c267fa1099076188f1ad9d2a9b63f7930456fb99f3d8f" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.777482 4842 scope.go:117] "RemoveContainer" containerID="c1cc1b81874f37b6dd69a794f4c89e58f1e938624f539804095c18ceb3989c67" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.811299 4842 scope.go:117] "RemoveContainer" containerID="5828541a319e15b9a24397a64ce914d508fb08442c48731c2790845a873ff2cb" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.842833 4842 scope.go:117] "RemoveContainer" containerID="6586c2e8f7af2e360086efaa4a8a6c6f2493d034bdc7ef3f3fa3fe1325d17da7" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.869548 4842 scope.go:117] "RemoveContainer" containerID="83c2404b835485135c772ac74f310b1761d22ef1f63c10393be3a87c53fc66aa" Feb 02 07:12:19 crc kubenswrapper[4842]: I0202 07:12:19.894570 4842 scope.go:117] "RemoveContainer" containerID="c9da43fb971a5ef2a720b6588e511324cbe1b669ca26172de540c2c1051786f8" Feb 02 07:12:23 crc kubenswrapper[4842]: I0202 07:12:23.434605 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:12:23 crc kubenswrapper[4842]: E0202 07:12:23.435404 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.927712 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"] Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928394 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="ovn-northd" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928417 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="ovn-northd" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928443 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-updater" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928469 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-updater" Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928497 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-log" Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928512 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-log" Feb 02 07:12:24 crc 
kubenswrapper[4842]: E0202 07:12:24.928543 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928559 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928582 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-notification-agent"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928596 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-notification-agent"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928764 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="sg-core"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928780 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="sg-core"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928798 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-updater"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928814 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-updater"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928847 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928858 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928873 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerName="nova-scheduler-scheduler"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928888 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerName="nova-scheduler-scheduler"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928917 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" containerName="kube-state-metrics"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928933 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" containerName="kube-state-metrics"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928955 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="mysql-bootstrap"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928969 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="mysql-bootstrap"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.928981 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4d672b-cb7a-406d-ab62-12745f300ef0" containerName="memcached"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.928995 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4d672b-cb7a-406d-ab62-12745f300ef0" containerName="memcached"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929027 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="rabbitmq"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929044 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="rabbitmq"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929065 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="extract-content"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929081 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="extract-content"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929105 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929121 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929146 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929162 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929192 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929208 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929271 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929289 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929319 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929335 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929354 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929370 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929402 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929418 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929451 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929467 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929482 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929497 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929516 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929532 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929556 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929571 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929588 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929604 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929635 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-central-agent"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929653 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-central-agent"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929668 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929683 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929702 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="setup-container"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929718 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="setup-container"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929745 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929761 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929786 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="registry-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929803 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="registry-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929820 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929836 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929862 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-metadata"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929878 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-metadata"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929898 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929913 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929944 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="rsync"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.929959 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="rsync"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.929990 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930006 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930031 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930048 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930066 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930081 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930098 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930114 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930133 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="swift-recon-cron"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930149 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="swift-recon-cron"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930170 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930187 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930249 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="extract-utilities"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930266 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="extract-utilities"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930287 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="openstack-network-exporter"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930303 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="openstack-network-exporter"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930326 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="galera"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930342 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="galera"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930360 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930375 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930395 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930411 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930429 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930444 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930459 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930473 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930494 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930510 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930534 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server-init"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930550 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server-init"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930579 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930594 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930623 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" containerName="keystone-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930637 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" containerName="keystone-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930654 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930669 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930690 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-expirer"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930705 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-expirer"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930727 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930742 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930764 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="setup-container"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930778 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="setup-container"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930798 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930813 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930839 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930854 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930876 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930892 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930917 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="rabbitmq"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930933 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="rabbitmq"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930954 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.930970 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.930991 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931006 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.931025 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931041 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.931059 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-reaper"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931076 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-reaper"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.931103 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931118 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.931135 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931151 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: E0202 07:12:24.931164 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerName="nova-cell1-conductor-conductor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931176 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerName="nova-cell1-conductor-conductor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931470 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931494 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931518 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931530 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931550 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f94c60e-a4fc-4b7d-96cd-367d46a731c4" containerName="nova-scheduler-scheduler"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931565 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="rsync"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931583 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931599 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931616 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931631 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931653 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbda1f81-b862-4ee7-84ce-590c353e4d5b" containerName="nova-cell0-conductor-conductor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931670 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-expirer"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931686 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-metadata"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931708 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="swift-recon-cron"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931721 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4d672b-cb7a-406d-ab62-12745f300ef0" containerName="memcached"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931734 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931747 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="441d47f7-e5dd-456f-b6fa-10a642be6742" containerName="rabbitmq"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931765 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="4850512e-bbc8-468d-94ef-1d1be3b0b49c" containerName="nova-cell1-conductor-conductor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931786 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f0d774-dbe6-45d5-9ffa-64383c8be0d7" containerName="registry-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931802 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931815 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="ovn-northd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931869 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6064786a-fa53-47a7-88ee-384cf70a86c6" containerName="openstack-network-exporter"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931892 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931906 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovs-vswitchd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931919 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931936 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931950 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931972 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-reaper"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.931991 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2ca532-dbbc-4148-8d2f-fc474685f0bd" containerName="rabbitmq"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932010 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932028 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7343dd67-a085-4da9-8d79-f25ea1e20ca6" containerName="keystone-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932042 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f55116-a518-4f21-8816-6f8232a6f68d" containerName="glance-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932060 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b11cfdf-ed7a-48ce-97eb-e03cd6be314c" containerName="kube-state-metrics"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932080 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56025ce-3772-435d-bdba-a4d1ba9d6e2f" containerName="placement-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932093 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="900b2d20-01c8-47e0-8271-ccfd8549d468" containerName="cinder-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932110 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00b7c2b-79ea-4cd1-80c3-f74f7e398ffd" containerName="barbican-worker-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932126 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="709c39fb-802f-4690-89f6-41a717e7244c" containerName="galera"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932142 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="54aa018a-3e7e-4c95-9c1d-387543ed5af0" containerName="nova-metadata-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932161 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932174 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932190 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932204 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932245 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932261 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6d1a00-c27b-418e-afa9-01c8c7802127" containerName="ovsdb-server"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932277 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-updater"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932291 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="account-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932304 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="748756c2-ee60-42ce-835e-bfaa7007d7ac" containerName="barbican-keystone-listener-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932323 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-central-agent"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932345 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="679e6e39-029a-452e-a375-bf0b937e3fbe" containerName="barbican-keystone-listener-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932362 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb022115-b53a-4ed0-a2a0-b44644dc26a7" containerName="barbican-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932379 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d6691d-0283-4dd7-966d-ceba8bde7895" containerName="barbican-worker-log"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932397 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="sg-core"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932416 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c96a7e1-78c3-449d-9200-735db4ee7086" containerName="glance-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932430 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="25609b1c-e1e9-4633-b3e3-93bd2f4396de" containerName="nova-api-api"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932445 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="174fcd53-40ab-4d19-a317-bc5cd117d2a4" containerName="ceilometer-notification-agent"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932464 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="object-updater"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932480 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-auditor"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932494 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="928a8c7e-d835-4795-8197-1861e4fd8f83" containerName="container-replicator"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932510 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eff2351-b4e8-43cf-a232-9c36cb11c130" containerName="proxy-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932522 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="953bf671-ca79-4208-9bab-672dc079dd82" containerName="neutron-httpd"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.932986 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="b912e45d-72e7-4250-9757-add1efcfb054" containerName="mariadb-account-create-update"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.934259 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:24 crc kubenswrapper[4842]: I0202 07:12:24.965117 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"]
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.126080 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bcr4\" (UniqueName: \"kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.126255 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.126293 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.227300 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.227401 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bcr4\" (UniqueName: \"kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.227463 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.228022 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.228033 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.245144 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bcr4\" (UniqueName: \"kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4\") pod \"redhat-marketplace-4s2s4\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") " pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.264417 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.748197 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"]
Feb 02 07:12:25 crc kubenswrapper[4842]: I0202 07:12:25.913817 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerStarted","Data":"24d91f3012e33754aacb4102942da6f61dfa4b5e76f13f807231a7a0da746b65"}
Feb 02 07:12:26 crc kubenswrapper[4842]: I0202 07:12:26.924889 4842 generic.go:334] "Generic (PLEG): container finished" podID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerID="6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd" exitCode=0
Feb 02 07:12:26 crc kubenswrapper[4842]: I0202 07:12:26.924992 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerDied","Data":"6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd"}
Feb 02 07:12:27 crc kubenswrapper[4842]: I0202 07:12:27.939825 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerStarted","Data":"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b"}
Feb 02 07:12:28 crc kubenswrapper[4842]: I0202 07:12:28.953400 4842 generic.go:334] "Generic (PLEG): container finished" podID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerID="e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b" exitCode=0
Feb 02 07:12:28 crc kubenswrapper[4842]: I0202 07:12:28.953475 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerDied","Data":"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b"}
Feb 02 07:12:29 crc kubenswrapper[4842]: I0202 07:12:29.968081 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerStarted","Data":"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173"}
Feb 02 07:12:29 crc kubenswrapper[4842]: I0202 07:12:29.996331 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4s2s4" podStartSLOduration=3.590215585 podStartE2EDuration="5.996314873s" podCreationTimestamp="2026-02-02 07:12:24 +0000 UTC" firstStartedPulling="2026-02-02 07:12:26.927333543 +0000 UTC m=+1572.304601455" lastFinishedPulling="2026-02-02 07:12:29.333432781 +0000 UTC m=+1574.710700743" observedRunningTime="2026-02-02 07:12:29.994493288 +0000 UTC m=+1575.371761240" watchObservedRunningTime="2026-02-02 07:12:29.996314873 +0000 UTC m=+1575.373582795"
Feb 02 07:12:35 crc kubenswrapper[4842]: I0202 07:12:35.265040 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:35 crc kubenswrapper[4842]: I0202 07:12:35.266819 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:35 crc kubenswrapper[4842]: I0202 07:12:35.329578 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:36 crc kubenswrapper[4842]: I0202 07:12:36.108010 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:36 crc kubenswrapper[4842]: I0202 07:12:36.169154 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"]
Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.042898 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4s2s4" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="registry-server" containerID="cri-o://db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173" gracePeriod=2
Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.434432 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87"
Feb 02 07:12:38 crc kubenswrapper[4842]: E0202 07:12:38.435119 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.567837 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2s4"
Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.666919 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content\") pod \"99f8d884-14b5-451d-9fdc-fc33e7615919\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") "
Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.667019 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bcr4\" (UniqueName: \"kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4\") pod \"99f8d884-14b5-451d-9fdc-fc33e7615919\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") "
Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.667070 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities\") pod \"99f8d884-14b5-451d-9fdc-fc33e7615919\" (UID: \"99f8d884-14b5-451d-9fdc-fc33e7615919\") "
Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.668802 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities" (OuterVolumeSpecName: "utilities") pod "99f8d884-14b5-451d-9fdc-fc33e7615919" (UID: "99f8d884-14b5-451d-9fdc-fc33e7615919"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.674131 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4" (OuterVolumeSpecName: "kube-api-access-9bcr4") pod "99f8d884-14b5-451d-9fdc-fc33e7615919" (UID: "99f8d884-14b5-451d-9fdc-fc33e7615919"). InnerVolumeSpecName "kube-api-access-9bcr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.701732 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99f8d884-14b5-451d-9fdc-fc33e7615919" (UID: "99f8d884-14b5-451d-9fdc-fc33e7615919"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.768782 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.768819 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bcr4\" (UniqueName: \"kubernetes.io/projected/99f8d884-14b5-451d-9fdc-fc33e7615919-kube-api-access-9bcr4\") on node \"crc\" DevicePath \"\"" Feb 02 07:12:38 crc kubenswrapper[4842]: I0202 07:12:38.768850 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99f8d884-14b5-451d-9fdc-fc33e7615919-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.058322 4842 generic.go:334] "Generic (PLEG): container finished" podID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerID="db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173" exitCode=0 Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.058371 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerDied","Data":"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173"} Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.058445 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4s2s4" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.058473 4842 scope.go:117] "RemoveContainer" containerID="db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.058450 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4s2s4" event={"ID":"99f8d884-14b5-451d-9fdc-fc33e7615919","Type":"ContainerDied","Data":"24d91f3012e33754aacb4102942da6f61dfa4b5e76f13f807231a7a0da746b65"} Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.100479 4842 scope.go:117] "RemoveContainer" containerID="e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.121303 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"] Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.127802 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4s2s4"] Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.130296 4842 scope.go:117] "RemoveContainer" containerID="6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.177579 4842 scope.go:117] "RemoveContainer" containerID="db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173" Feb 02 07:12:39 crc kubenswrapper[4842]: E0202 07:12:39.178013 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173\": container with ID starting with db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173 not found: ID does not exist" containerID="db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.178052 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173"} err="failed to get container status \"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173\": rpc error: code = NotFound desc = could not find container \"db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173\": container with ID starting with db5a266381872b2d9b47a4edd02f653cfac12b456b45fea6401c1cbadafe2173 not found: ID does not exist" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.178080 4842 scope.go:117] "RemoveContainer" containerID="e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b" Feb 02 07:12:39 crc kubenswrapper[4842]: E0202 07:12:39.178638 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b\": container with ID starting with e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b not found: ID does not exist" containerID="e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.178664 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b"} err="failed to get container status \"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b\": rpc error: code = NotFound desc = could not find 
container \"e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b\": container with ID starting with e46d814da721b9a886afe1e704d19f4d623a07fc712f86204a903efb81cb3a5b not found: ID does not exist" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.178677 4842 scope.go:117] "RemoveContainer" containerID="6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd" Feb 02 07:12:39 crc kubenswrapper[4842]: E0202 07:12:39.179018 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd\": container with ID starting with 6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd not found: ID does not exist" containerID="6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.179067 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd"} err="failed to get container status \"6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd\": rpc error: code = NotFound desc = could not find container \"6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd\": container with ID starting with 6e7f3dc221760300eb89a57893eb25784296cb5d5a4ffe41eda08502ffed75bd not found: ID does not exist" Feb 02 07:12:39 crc kubenswrapper[4842]: I0202 07:12:39.449545 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" path="/var/lib/kubelet/pods/99f8d884-14b5-451d-9fdc-fc33e7615919/volumes" Feb 02 07:12:49 crc kubenswrapper[4842]: I0202 07:12:49.434321 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:12:49 crc kubenswrapper[4842]: E0202 07:12:49.435411 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:00 crc kubenswrapper[4842]: I0202 07:13:00.433838 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:13:00 crc kubenswrapper[4842]: E0202 07:13:00.434962 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.276326 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:01 crc kubenswrapper[4842]: E0202 07:13:01.277161 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="registry-server" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.277198 4842 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="registry-server" Feb 02 07:13:01 crc kubenswrapper[4842]: E0202 07:13:01.277264 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="extract-utilities" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.277275 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="extract-utilities" Feb 02 07:13:01 crc kubenswrapper[4842]: E0202 07:13:01.277293 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="extract-content" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.277312 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="extract-content" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.277740 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f8d884-14b5-451d-9fdc-fc33e7615919" containerName="registry-server" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.279930 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.290962 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.456334 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.456772 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.456813 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss4hv\" (UniqueName: \"kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.557887 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.557937 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.557976 4842 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss4hv\" (UniqueName: \"kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.558454 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.558811 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.576806 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss4hv\" (UniqueName: \"kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv\") pod \"community-operators-pqgtv\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:01 crc kubenswrapper[4842]: I0202 07:13:01.614096 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:02 crc kubenswrapper[4842]: I0202 07:13:02.079923 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:02 crc kubenswrapper[4842]: W0202 07:13:02.082492 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice/crio-f8c9b8444d00675f23f6d9dd71b5ba964158a6776857772c61254d542aa6af15 WatchSource:0}: Error finding container f8c9b8444d00675f23f6d9dd71b5ba964158a6776857772c61254d542aa6af15: Status 404 returned error can't find the container with id f8c9b8444d00675f23f6d9dd71b5ba964158a6776857772c61254d542aa6af15 Feb 02 07:13:02 crc kubenswrapper[4842]: I0202 07:13:02.296840 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerStarted","Data":"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e"} Feb 02 07:13:02 crc kubenswrapper[4842]: I0202 07:13:02.296901 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerStarted","Data":"f8c9b8444d00675f23f6d9dd71b5ba964158a6776857772c61254d542aa6af15"} Feb 02 07:13:03 crc kubenswrapper[4842]: I0202 07:13:03.311949 4842 generic.go:334] "Generic (PLEG): container finished" podID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerID="25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e" exitCode=0 Feb 02 07:13:03 crc kubenswrapper[4842]: I0202 07:13:03.312006 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" 
event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerDied","Data":"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e"} Feb 02 07:13:03 crc kubenswrapper[4842]: I0202 07:13:03.314262 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerStarted","Data":"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb"} Feb 02 07:13:04 crc kubenswrapper[4842]: I0202 07:13:04.327953 4842 generic.go:334] "Generic (PLEG): container finished" podID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerID="d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb" exitCode=0 Feb 02 07:13:04 crc kubenswrapper[4842]: I0202 07:13:04.328024 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerDied","Data":"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb"} Feb 02 07:13:05 crc kubenswrapper[4842]: I0202 07:13:05.341275 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerStarted","Data":"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f"} Feb 02 07:13:05 crc kubenswrapper[4842]: I0202 07:13:05.371462 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pqgtv" podStartSLOduration=1.936444275 podStartE2EDuration="4.371431339s" podCreationTimestamp="2026-02-02 07:13:01 +0000 UTC" firstStartedPulling="2026-02-02 07:13:02.299364693 +0000 UTC m=+1607.676632645" lastFinishedPulling="2026-02-02 07:13:04.734351787 +0000 UTC m=+1610.111619709" observedRunningTime="2026-02-02 07:13:05.367524673 +0000 UTC m=+1610.744792655" watchObservedRunningTime="2026-02-02 07:13:05.371431339 +0000 UTC m=+1610.748699291" Feb 02 07:13:11 crc kubenswrapper[4842]: I0202 07:13:11.614786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:11 crc kubenswrapper[4842]: I0202 07:13:11.615641 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:11 crc kubenswrapper[4842]: I0202 07:13:11.690508 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:12 crc kubenswrapper[4842]: I0202 07:13:12.434283 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:13:12 crc kubenswrapper[4842]: E0202 07:13:12.434727 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:12 crc kubenswrapper[4842]: I0202 07:13:12.488256 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:12 crc kubenswrapper[4842]: I0202 07:13:12.565343 4842 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:14 crc kubenswrapper[4842]: I0202 07:13:14.432363 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pqgtv" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="registry-server" containerID="cri-o://f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f" gracePeriod=2 Feb 02 07:13:14 crc kubenswrapper[4842]: I0202 07:13:14.969069 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.082546 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content\") pod \"a8eb678e-c4b4-4c94-ad98-b3327276614e\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.082635 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss4hv\" (UniqueName: \"kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv\") pod \"a8eb678e-c4b4-4c94-ad98-b3327276614e\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.082798 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities\") pod \"a8eb678e-c4b4-4c94-ad98-b3327276614e\" (UID: \"a8eb678e-c4b4-4c94-ad98-b3327276614e\") " Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.083866 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities" (OuterVolumeSpecName: "utilities") pod "a8eb678e-c4b4-4c94-ad98-b3327276614e" (UID: "a8eb678e-c4b4-4c94-ad98-b3327276614e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.091091 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv" (OuterVolumeSpecName: "kube-api-access-ss4hv") pod "a8eb678e-c4b4-4c94-ad98-b3327276614e" (UID: "a8eb678e-c4b4-4c94-ad98-b3327276614e"). InnerVolumeSpecName "kube-api-access-ss4hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.159554 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8eb678e-c4b4-4c94-ad98-b3327276614e" (UID: "a8eb678e-c4b4-4c94-ad98-b3327276614e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.185025 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.185241 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb678e-c4b4-4c94-ad98-b3327276614e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.185353 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss4hv\" (UniqueName: \"kubernetes.io/projected/a8eb678e-c4b4-4c94-ad98-b3327276614e-kube-api-access-ss4hv\") on node \"crc\" DevicePath \"\"" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.445095 4842 generic.go:334] "Generic (PLEG): container finished" podID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerID="f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f" exitCode=0 Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.445318 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pqgtv" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.453714 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerDied","Data":"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f"} Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.454322 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pqgtv" event={"ID":"a8eb678e-c4b4-4c94-ad98-b3327276614e","Type":"ContainerDied","Data":"f8c9b8444d00675f23f6d9dd71b5ba964158a6776857772c61254d542aa6af15"} Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.454345 4842 scope.go:117] "RemoveContainer" containerID="f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.497784 4842 scope.go:117] "RemoveContainer" containerID="d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.505715 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.514674 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pqgtv"] Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.534795 4842 scope.go:117] "RemoveContainer" containerID="25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.564707 4842 scope.go:117] "RemoveContainer" containerID="f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f" Feb 02 07:13:15 crc kubenswrapper[4842]: E0202 07:13:15.565173 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f\": container with ID starting with f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f not found: ID does not exist" containerID="f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.565313 
4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f"} err="failed to get container status \"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f\": rpc error: code = NotFound desc = could not find container \"f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f\": container with ID starting with f5604ba2068b2ea28e132ae4f7ec4f98ae0a5739b41c47ef7c4c3eb8e2c5eb8f not found: ID does not exist" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.565354 4842 scope.go:117] "RemoveContainer" containerID="d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb" Feb 02 07:13:15 crc kubenswrapper[4842]: E0202 07:13:15.566124 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb\": container with ID starting with d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb not found: ID does not exist" containerID="d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.566194 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb"} err="failed to get container status \"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb\": rpc error: code = NotFound desc = could not find container \"d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb\": container with ID starting with d0ea6803f226d8f2249251471f407ed70ffa7b8703286ab085b6aa52044d42eb not found: ID does not exist" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.566281 4842 scope.go:117] "RemoveContainer" containerID="25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e" Feb 02 07:13:15 crc kubenswrapper[4842]: E0202 07:13:15.567181 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e\": container with ID starting with 25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e not found: ID does not exist" containerID="25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e" Feb 02 07:13:15 crc kubenswrapper[4842]: I0202 07:13:15.567256 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e"} err="failed to get container status \"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e\": rpc error: code = NotFound desc = could not find container \"25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e\": container with ID starting with 25cdab15747e575edf63cc27f41f20f404ae3e0d124509a049a546fd072db81e not found: ID does not exist" Feb 02 07:13:16 crc kubenswrapper[4842]: E0202 07:13:16.337514 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:13:17 crc kubenswrapper[4842]: I0202 07:13:17.471453 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" 
path="/var/lib/kubelet/pods/a8eb678e-c4b4-4c94-ad98-b3327276614e/volumes" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.073554 4842 scope.go:117] "RemoveContainer" containerID="f5f4ebc4957f3bd8515b3e4a7d7bf4b7c05ae94bf9d531ffc8914bcdc9bde611" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.135498 4842 scope.go:117] "RemoveContainer" containerID="7b7d5e5edb2af232c2055e5da49c69d329f4113726a849604a2b594aefa2f3af" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.161120 4842 scope.go:117] "RemoveContainer" containerID="b2f7cb4727d9784f10ff6a0c8a30a31bb44be887023eca0a860978903f19daa6" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.197190 4842 scope.go:117] "RemoveContainer" containerID="36b2b05bbe375b399c98b67e29fc0579c7a94211ddd64f7ddba9592374c382bd" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.226413 4842 scope.go:117] "RemoveContainer" containerID="23dd0ca466edc848ab9f75914f169da25ba7c3c7918e89f13ac53448e128d009" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.249693 4842 scope.go:117] "RemoveContainer" containerID="e8f9c804c29efb0cbd22bbe4d584e668c739a0efdfc614e0546bb32ea70ef867" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.274197 4842 scope.go:117] "RemoveContainer" containerID="022aa50ba41d0a413d49d7816b95c9ce705b40b44d3e4b26928051ada603decd" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.303279 4842 scope.go:117] "RemoveContainer" containerID="c593d09b2735487782551786767a4ed77fad095c2d0a78c5ed62f1b78de5ce7e" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.321999 4842 scope.go:117] "RemoveContainer" containerID="3a5cb3f49b99abe6192e05d777a57a2ec064de70a666aa2c8b933349f5030599" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.341901 4842 scope.go:117] "RemoveContainer" containerID="2c7088cf1821b77c6f7eefcfe1152002a124d024b112d220292c3bfdaf924d4c" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.366148 4842 scope.go:117] "RemoveContainer" containerID="adafd15daec92386baa24cf42bc0363f97b26ac9307e8e8272e537e2c7e8b2cf" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.424836 4842 scope.go:117] "RemoveContainer" containerID="72e60f391adc327a7666947b2251ee7da0c5b5a42927991c1ba5e739d160e596" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.449606 4842 scope.go:117] "RemoveContainer" containerID="50694d5591176c65770672c30837d60f3438d04ee3ca91b5bc53b0366f9835df" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.511316 4842 scope.go:117] "RemoveContainer" containerID="ca50f3bd514767840a56ccfe9f58d3e7f3e73682b97d7191a9419836cd607b01" Feb 02 07:13:20 crc kubenswrapper[4842]: I0202 07:13:20.539348 4842 scope.go:117] "RemoveContainer" containerID="baeb51b0b4bb9444bd98551a3cc3dcb68f182ab93c0b62223c4c0a0707790ceb" Feb 02 07:13:25 crc kubenswrapper[4842]: I0202 07:13:25.440898 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:13:25 crc kubenswrapper[4842]: E0202 07:13:25.441722 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:26 crc kubenswrapper[4842]: E0202 07:13:26.521774 4842 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:13:36 crc kubenswrapper[4842]: E0202 07:13:36.707451 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:13:38 crc kubenswrapper[4842]: I0202 07:13:38.433719 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:13:38 crc kubenswrapper[4842]: E0202 07:13:38.434517 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:46 crc kubenswrapper[4842]: E0202 07:13:46.901110 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:13:49 crc kubenswrapper[4842]: I0202 07:13:49.434102 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:13:49 crc kubenswrapper[4842]: E0202 07:13:49.436069 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:13:57 crc kubenswrapper[4842]: E0202 07:13:57.126553 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable to find data in memory cache]" Feb 02 07:14:04 crc kubenswrapper[4842]: I0202 07:14:04.433904 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:14:04 crc kubenswrapper[4842]: E0202 07:14:04.435983 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:14:07 crc kubenswrapper[4842]: E0202 07:14:07.353494 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8eb678e_c4b4_4c94_ad98_b3327276614e.slice\": RecentStats: unable 
to find data in memory cache]" Feb 02 07:14:15 crc kubenswrapper[4842]: I0202 07:14:15.437805 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:14:15 crc kubenswrapper[4842]: E0202 07:14:15.438592 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:14:20 crc kubenswrapper[4842]: I0202 07:14:20.780980 4842 scope.go:117] "RemoveContainer" containerID="a176e8b4ea564bc302309fcba58a47b8e68f174edeb83a184476a852cc3c272e" Feb 02 07:14:20 crc kubenswrapper[4842]: I0202 07:14:20.812609 4842 scope.go:117] "RemoveContainer" containerID="55d824abd1b5b048d587e61fdc8db2106087cb9113bf5c22c3cc72f341861791" Feb 02 07:14:20 crc kubenswrapper[4842]: I0202 07:14:20.890060 4842 scope.go:117] "RemoveContainer" containerID="5f6dabb3b7c34feb5a2123ac9fa2eb87a3cf03a3caf3efd65fb72c179cb7cd52" Feb 02 07:14:20 crc kubenswrapper[4842]: I0202 07:14:20.906385 4842 scope.go:117] "RemoveContainer" containerID="2d911f330fb7cdc5064800cce65135b706e9f3cc93857bcb38ce5bd51f0bd398" Feb 02 07:14:28 crc kubenswrapper[4842]: I0202 07:14:28.434280 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:14:28 crc kubenswrapper[4842]: E0202 07:14:28.435532 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:14:40 crc kubenswrapper[4842]: I0202 07:14:40.434522 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:14:40 crc kubenswrapper[4842]: E0202 07:14:40.435837 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:14:53 crc kubenswrapper[4842]: I0202 07:14:53.852114 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:14:53 crc kubenswrapper[4842]: E0202 07:14:53.852961 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.165026 4842 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb"] Feb 02 07:15:00 crc kubenswrapper[4842]: E0202 07:15:00.166284 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="extract-content" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.166314 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="extract-content" Feb 02 07:15:00 crc kubenswrapper[4842]: E0202 07:15:00.166333 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="registry-server" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.166349 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="registry-server" Feb 02 07:15:00 crc kubenswrapper[4842]: E0202 07:15:00.166382 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="extract-utilities" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.166401 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="extract-utilities" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.166710 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8eb678e-c4b4-4c94-ad98-b3327276614e" containerName="registry-server" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.167761 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.170460 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.171598 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.187609 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb"] Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.224508 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhnnd\" (UniqueName: \"kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.224558 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.224630 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.325661 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhnnd\" (UniqueName: \"kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.325712 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.325771 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.326850 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.334655 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.356449 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhnnd\" (UniqueName: \"kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd\") pod \"collect-profiles-29500275-ts5jb\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.529853 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:00 crc kubenswrapper[4842]: I0202 07:15:00.782821 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb"] Feb 02 07:15:01 crc kubenswrapper[4842]: I0202 07:15:01.468834 4842 generic.go:334] "Generic (PLEG): container finished" podID="94334935-cf80-444c-b508-8c45e9780eee" containerID="3ec04990d6c97adea2fe95dabf427fb8df7522b562c84dbbcac33e51d0d54b26" exitCode=0 Feb 02 07:15:01 crc kubenswrapper[4842]: I0202 07:15:01.469069 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" event={"ID":"94334935-cf80-444c-b508-8c45e9780eee","Type":"ContainerDied","Data":"3ec04990d6c97adea2fe95dabf427fb8df7522b562c84dbbcac33e51d0d54b26"} Feb 02 07:15:01 crc kubenswrapper[4842]: I0202 07:15:01.469253 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" event={"ID":"94334935-cf80-444c-b508-8c45e9780eee","Type":"ContainerStarted","Data":"ffd9b2a09b1899cc128dde5a3fdc164f53315d8e11ae540afa00b51d8d3daceb"} Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.790419 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.871456 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume\") pod \"94334935-cf80-444c-b508-8c45e9780eee\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.871612 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume\") pod \"94334935-cf80-444c-b508-8c45e9780eee\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.871663 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhnnd\" (UniqueName: \"kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd\") pod \"94334935-cf80-444c-b508-8c45e9780eee\" (UID: \"94334935-cf80-444c-b508-8c45e9780eee\") " Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.874254 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume" (OuterVolumeSpecName: "config-volume") pod "94334935-cf80-444c-b508-8c45e9780eee" (UID: "94334935-cf80-444c-b508-8c45e9780eee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.877356 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "94334935-cf80-444c-b508-8c45e9780eee" (UID: "94334935-cf80-444c-b508-8c45e9780eee"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.878388 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd" (OuterVolumeSpecName: "kube-api-access-rhnnd") pod "94334935-cf80-444c-b508-8c45e9780eee" (UID: "94334935-cf80-444c-b508-8c45e9780eee"). InnerVolumeSpecName "kube-api-access-rhnnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.973503 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94334935-cf80-444c-b508-8c45e9780eee-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.973564 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhnnd\" (UniqueName: \"kubernetes.io/projected/94334935-cf80-444c-b508-8c45e9780eee-kube-api-access-rhnnd\") on node \"crc\" DevicePath \"\"" Feb 02 07:15:02 crc kubenswrapper[4842]: I0202 07:15:02.973586 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/94334935-cf80-444c-b508-8c45e9780eee-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:15:03 crc kubenswrapper[4842]: I0202 07:15:03.505502 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" event={"ID":"94334935-cf80-444c-b508-8c45e9780eee","Type":"ContainerDied","Data":"ffd9b2a09b1899cc128dde5a3fdc164f53315d8e11ae540afa00b51d8d3daceb"} Feb 02 07:15:03 crc kubenswrapper[4842]: I0202 07:15:03.505572 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffd9b2a09b1899cc128dde5a3fdc164f53315d8e11ae540afa00b51d8d3daceb" Feb 02 07:15:03 crc kubenswrapper[4842]: I0202 07:15:03.505920 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb" Feb 02 07:15:04 crc kubenswrapper[4842]: I0202 07:15:04.434261 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:15:04 crc kubenswrapper[4842]: E0202 07:15:04.436010 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:15:16 crc kubenswrapper[4842]: I0202 07:15:16.434532 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:15:16 crc kubenswrapper[4842]: E0202 07:15:16.436064 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:15:20 crc kubenswrapper[4842]: I0202 07:15:20.995933 4842 scope.go:117] "RemoveContainer" containerID="1f08602808f0c1da9b996db624f132bc20c5b91004db8c9c6f2ffa67741d3bbc" Feb 02 07:15:21 crc kubenswrapper[4842]: I0202 07:15:21.025892 4842 scope.go:117] "RemoveContainer" containerID="bebe8c74ad90a2dc028ad9e30942ced9f67c8af8df16026b5b89379d97e80e00" Feb 02 07:15:21 crc kubenswrapper[4842]: I0202 07:15:21.058897 4842 scope.go:117] "RemoveContainer" containerID="999eacbb47149d7ff50ad4df7698189fd41e6e1be3e25e8c83a58d8439abc53c" Feb 02 07:15:28 crc kubenswrapper[4842]: I0202 07:15:28.434321 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:15:28 crc kubenswrapper[4842]: E0202 07:15:28.435660 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:15:41 crc kubenswrapper[4842]: I0202 07:15:41.434155 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:15:41 crc kubenswrapper[4842]: E0202 07:15:41.435104 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:15:53 crc kubenswrapper[4842]: I0202 07:15:53.434721 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:15:53 crc kubenswrapper[4842]: I0202 
07:15:53.972357 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1"} Feb 02 07:18:12 crc kubenswrapper[4842]: I0202 07:18:12.146517 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:18:12 crc kubenswrapper[4842]: I0202 07:18:12.147189 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:18:42 crc kubenswrapper[4842]: I0202 07:18:42.146692 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:18:42 crc kubenswrapper[4842]: I0202 07:18:42.147734 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.146705 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.148302 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.148457 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.149129 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.149320 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1" 
gracePeriod=600 Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.859809 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1" exitCode=0 Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.859905 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1"} Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.860404 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"} Feb 02 07:19:12 crc kubenswrapper[4842]: I0202 07:19:12.860422 4842 scope.go:117] "RemoveContainer" containerID="fe7756a3802424ae4172016c8ad381cc916fff66b8224152f5f15fb732efae87" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.449764 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:19:53 crc kubenswrapper[4842]: E0202 07:19:53.451056 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94334935-cf80-444c-b508-8c45e9780eee" containerName="collect-profiles" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.451077 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="94334935-cf80-444c-b508-8c45e9780eee" containerName="collect-profiles" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.451337 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="94334935-cf80-444c-b508-8c45e9780eee" containerName="collect-profiles" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.452905 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.486141 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.565121 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.565181 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.565563 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6v85\" (UniqueName: \"kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.666411 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.666460 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.666509 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6v85\" (UniqueName: \"kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.666951 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.667017 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.693236 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n6v85\" (UniqueName: \"kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85\") pod \"redhat-operators-6fs69\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:53 crc kubenswrapper[4842]: I0202 07:19:53.778449 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:19:54 crc kubenswrapper[4842]: I0202 07:19:54.023675 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:19:54 crc kubenswrapper[4842]: I0202 07:19:54.245325 4842 generic.go:334] "Generic (PLEG): container finished" podID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerID="54d29c0b963abf2e6cbe9930fdfb039211d0f6d3757608dff7e813a74402f5e9" exitCode=0 Feb 02 07:19:54 crc kubenswrapper[4842]: I0202 07:19:54.245436 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerDied","Data":"54d29c0b963abf2e6cbe9930fdfb039211d0f6d3757608dff7e813a74402f5e9"} Feb 02 07:19:54 crc kubenswrapper[4842]: I0202 07:19:54.245724 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerStarted","Data":"65e76528169fb677cd540320202422b2280b76074014ea15d85e95ebce1f9e4b"} Feb 02 07:19:54 crc kubenswrapper[4842]: I0202 07:19:54.247279 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:19:56 crc kubenswrapper[4842]: I0202 07:19:56.268773 4842 generic.go:334] "Generic (PLEG): container finished" podID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerID="15961ca3966c5e19bf382f4ff38a45f3b4f496271c3a403b37983001d2953ade" exitCode=0 Feb 02 07:19:56 crc kubenswrapper[4842]: I0202 07:19:56.269179 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerDied","Data":"15961ca3966c5e19bf382f4ff38a45f3b4f496271c3a403b37983001d2953ade"} Feb 02 07:19:57 crc kubenswrapper[4842]: I0202 07:19:57.287687 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerStarted","Data":"26b03de8273eeb8c731faea10ebe84f0a97c933934818912e8d4605f3c713f26"} Feb 02 07:19:57 crc kubenswrapper[4842]: I0202 07:19:57.316617 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6fs69" podStartSLOduration=1.779264986 podStartE2EDuration="4.316593607s" podCreationTimestamp="2026-02-02 07:19:53 +0000 UTC" firstStartedPulling="2026-02-02 07:19:54.246942791 +0000 UTC m=+2019.624210713" lastFinishedPulling="2026-02-02 07:19:56.784271382 +0000 UTC m=+2022.161539334" observedRunningTime="2026-02-02 07:19:57.315572032 +0000 UTC m=+2022.692839954" watchObservedRunningTime="2026-02-02 07:19:57.316593607 +0000 UTC m=+2022.693861559" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.091400 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.094901 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.114896 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.197976 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.198061 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.198200 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8gb\" (UniqueName: \"kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.299243 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.299545 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8gb\" (UniqueName: \"kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.299624 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.299798 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.300043 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.322779 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fw8gb\" (UniqueName: \"kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb\") pod \"certified-operators-72zmj\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.416665 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.779017 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.779358 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.828474 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:03 crc kubenswrapper[4842]: I0202 07:20:03.855999 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:04 crc kubenswrapper[4842]: I0202 07:20:04.341185 4842 generic.go:334] "Generic (PLEG): container finished" podID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerID="d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d" exitCode=0 Feb 02 07:20:04 crc kubenswrapper[4842]: I0202 07:20:04.341303 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerDied","Data":"d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d"} Feb 02 07:20:04 crc kubenswrapper[4842]: I0202 07:20:04.341683 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerStarted","Data":"e260a9f7d75fb075cee2c831326054895c9434e215ba53e7ea6103746c73ba81"} Feb 02 07:20:04 crc kubenswrapper[4842]: I0202 07:20:04.416494 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:05 crc kubenswrapper[4842]: I0202 07:20:05.358484 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerStarted","Data":"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d"} Feb 02 07:20:05 crc kubenswrapper[4842]: I0202 07:20:05.480993 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:20:06 crc kubenswrapper[4842]: I0202 07:20:06.369311 4842 generic.go:334] "Generic (PLEG): container finished" podID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerID="a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d" exitCode=0 Feb 02 07:20:06 crc kubenswrapper[4842]: I0202 07:20:06.369573 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6fs69" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="registry-server" containerID="cri-o://26b03de8273eeb8c731faea10ebe84f0a97c933934818912e8d4605f3c713f26" gracePeriod=2 Feb 02 07:20:06 crc kubenswrapper[4842]: I0202 07:20:06.370834 4842 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerDied","Data":"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d"} Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.379037 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerStarted","Data":"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c"} Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.383934 4842 generic.go:334] "Generic (PLEG): container finished" podID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerID="26b03de8273eeb8c731faea10ebe84f0a97c933934818912e8d4605f3c713f26" exitCode=0 Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.384009 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerDied","Data":"26b03de8273eeb8c731faea10ebe84f0a97c933934818912e8d4605f3c713f26"} Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.384040 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fs69" event={"ID":"ca00d8b2-3728-456f-bf49-285fb31385ef","Type":"ContainerDied","Data":"65e76528169fb677cd540320202422b2280b76074014ea15d85e95ebce1f9e4b"} Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.384081 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e76528169fb677cd540320202422b2280b76074014ea15d85e95ebce1f9e4b" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.405184 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-72zmj" podStartSLOduration=1.899990096 podStartE2EDuration="4.405165764s" podCreationTimestamp="2026-02-02 07:20:03 +0000 UTC" firstStartedPulling="2026-02-02 07:20:04.343553756 +0000 UTC m=+2029.720821708" lastFinishedPulling="2026-02-02 07:20:06.848729444 +0000 UTC m=+2032.225997376" observedRunningTime="2026-02-02 07:20:07.398478628 +0000 UTC m=+2032.775746550" watchObservedRunningTime="2026-02-02 07:20:07.405165764 +0000 UTC m=+2032.782433686" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.406929 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.568889 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities\") pod \"ca00d8b2-3728-456f-bf49-285fb31385ef\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.569062 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content\") pod \"ca00d8b2-3728-456f-bf49-285fb31385ef\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.569144 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6v85\" (UniqueName: \"kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85\") pod \"ca00d8b2-3728-456f-bf49-285fb31385ef\" (UID: \"ca00d8b2-3728-456f-bf49-285fb31385ef\") " Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.570284 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities" (OuterVolumeSpecName: "utilities") pod "ca00d8b2-3728-456f-bf49-285fb31385ef" (UID: "ca00d8b2-3728-456f-bf49-285fb31385ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.575542 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85" (OuterVolumeSpecName: "kube-api-access-n6v85") pod "ca00d8b2-3728-456f-bf49-285fb31385ef" (UID: "ca00d8b2-3728-456f-bf49-285fb31385ef"). InnerVolumeSpecName "kube-api-access-n6v85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.671706 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6v85\" (UniqueName: \"kubernetes.io/projected/ca00d8b2-3728-456f-bf49-285fb31385ef-kube-api-access-n6v85\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.672070 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.756296 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca00d8b2-3728-456f-bf49-285fb31385ef" (UID: "ca00d8b2-3728-456f-bf49-285fb31385ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:20:07 crc kubenswrapper[4842]: I0202 07:20:07.773551 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca00d8b2-3728-456f-bf49-285fb31385ef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:08 crc kubenswrapper[4842]: I0202 07:20:08.393989 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fs69" Feb 02 07:20:08 crc kubenswrapper[4842]: I0202 07:20:08.448615 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:20:08 crc kubenswrapper[4842]: I0202 07:20:08.459008 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6fs69"] Feb 02 07:20:09 crc kubenswrapper[4842]: I0202 07:20:09.449364 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" path="/var/lib/kubelet/pods/ca00d8b2-3728-456f-bf49-285fb31385ef/volumes" Feb 02 07:20:13 crc kubenswrapper[4842]: I0202 07:20:13.417424 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:13 crc kubenswrapper[4842]: I0202 07:20:13.417848 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:13 crc kubenswrapper[4842]: I0202 07:20:13.498357 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:13 crc kubenswrapper[4842]: I0202 07:20:13.574669 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:13 crc kubenswrapper[4842]: I0202 07:20:13.749127 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:15 crc kubenswrapper[4842]: I0202 07:20:15.458625 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-72zmj" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="registry-server" containerID="cri-o://05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c" gracePeriod=2 Feb 02 07:20:15 crc kubenswrapper[4842]: I0202 07:20:15.967832 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.109371 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw8gb\" (UniqueName: \"kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb\") pod \"453006f5-8304-47d9-b9d8-a4cc69692dcc\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.109568 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities\") pod \"453006f5-8304-47d9-b9d8-a4cc69692dcc\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.109664 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content\") pod \"453006f5-8304-47d9-b9d8-a4cc69692dcc\" (UID: \"453006f5-8304-47d9-b9d8-a4cc69692dcc\") " Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.111610 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities" (OuterVolumeSpecName: "utilities") pod "453006f5-8304-47d9-b9d8-a4cc69692dcc" (UID: "453006f5-8304-47d9-b9d8-a4cc69692dcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.123612 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb" (OuterVolumeSpecName: "kube-api-access-fw8gb") pod "453006f5-8304-47d9-b9d8-a4cc69692dcc" (UID: "453006f5-8304-47d9-b9d8-a4cc69692dcc"). InnerVolumeSpecName "kube-api-access-fw8gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.184478 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "453006f5-8304-47d9-b9d8-a4cc69692dcc" (UID: "453006f5-8304-47d9-b9d8-a4cc69692dcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.211148 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.211179 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw8gb\" (UniqueName: \"kubernetes.io/projected/453006f5-8304-47d9-b9d8-a4cc69692dcc-kube-api-access-fw8gb\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.211208 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/453006f5-8304-47d9-b9d8-a4cc69692dcc-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.472263 4842 generic.go:334] "Generic (PLEG): container finished" podID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerID="05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c" exitCode=0 Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.472347 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerDied","Data":"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c"} Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.472359 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-72zmj" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.472411 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-72zmj" event={"ID":"453006f5-8304-47d9-b9d8-a4cc69692dcc","Type":"ContainerDied","Data":"e260a9f7d75fb075cee2c831326054895c9434e215ba53e7ea6103746c73ba81"} Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.472451 4842 scope.go:117] "RemoveContainer" containerID="05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.508306 4842 scope.go:117] "RemoveContainer" containerID="a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.509053 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.513935 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-72zmj"] Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.534881 4842 scope.go:117] "RemoveContainer" containerID="d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.565836 4842 scope.go:117] "RemoveContainer" containerID="05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c" Feb 02 07:20:16 crc kubenswrapper[4842]: E0202 07:20:16.566539 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c\": container with ID starting with 05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c not found: ID does not exist" containerID="05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.566619 
4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c"} err="failed to get container status \"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c\": rpc error: code = NotFound desc = could not find container \"05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c\": container with ID starting with 05bbfa79e1de8510be3fed9eb02652d77961d7b89399c8f90bfd82bd6e6f6e1c not found: ID does not exist" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.566663 4842 scope.go:117] "RemoveContainer" containerID="a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d" Feb 02 07:20:16 crc kubenswrapper[4842]: E0202 07:20:16.567281 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d\": container with ID starting with a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d not found: ID does not exist" containerID="a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.567335 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d"} err="failed to get container status \"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d\": rpc error: code = NotFound desc = could not find container \"a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d\": container with ID starting with a30f59cb2cd06e957321cd08a9474ce66d57c65316dd219d721c8ca2a454864d not found: ID does not exist" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.567367 4842 scope.go:117] "RemoveContainer" containerID="d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d" Feb 02 07:20:16 crc kubenswrapper[4842]: E0202 07:20:16.568115 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d\": container with ID starting with d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d not found: ID does not exist" containerID="d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d" Feb 02 07:20:16 crc kubenswrapper[4842]: I0202 07:20:16.568176 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d"} err="failed to get container status \"d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d\": rpc error: code = NotFound desc = could not find container \"d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d\": container with ID starting with d78e9f3b704db0554e9e8735957f8db808c99b02e8c5ba30de44d2ef460d9d6d not found: ID does not exist" Feb 02 07:20:17 crc kubenswrapper[4842]: I0202 07:20:17.452721 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" path="/var/lib/kubelet/pods/453006f5-8304-47d9-b9d8-a4cc69692dcc/volumes" Feb 02 07:21:12 crc kubenswrapper[4842]: I0202 07:21:12.146723 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:21:12 crc kubenswrapper[4842]: I0202 07:21:12.147427 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:21:42 crc kubenswrapper[4842]: I0202 07:21:42.146688 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:21:42 crc kubenswrapper[4842]: I0202 07:21:42.147554 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.145959 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.146870 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.146956 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.147904 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.148033 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" gracePeriod=600 Feb 02 07:22:12 crc kubenswrapper[4842]: E0202 07:22:12.302588 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.592455 4842 
generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" exitCode=0 Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.592528 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6"} Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.592598 4842 scope.go:117] "RemoveContainer" containerID="ef3633cb81ad43f5900bb09958d1b9db8e2996aefec6cb08cbd8f8a8c4976bb1" Feb 02 07:22:12 crc kubenswrapper[4842]: I0202 07:22:12.593110 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:22:12 crc kubenswrapper[4842]: E0202 07:22:12.593487 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:22:25 crc kubenswrapper[4842]: I0202 07:22:25.441690 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:22:25 crc kubenswrapper[4842]: E0202 07:22:25.442997 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:22:37 crc kubenswrapper[4842]: I0202 07:22:37.435053 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:22:37 crc kubenswrapper[4842]: E0202 07:22:37.436143 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:22:52 crc kubenswrapper[4842]: I0202 07:22:52.434073 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:22:52 crc kubenswrapper[4842]: E0202 07:22:52.434751 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.722119 4842 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-5vntr"] Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723291 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="extract-content" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723313 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="extract-content" Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723336 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="extract-utilities" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723350 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="extract-utilities" Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723366 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723379 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723409 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723417 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723429 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="extract-utilities" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723439 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="extract-utilities" Feb 02 07:22:59 crc kubenswrapper[4842]: E0202 07:22:59.723457 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="extract-content" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723465 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="extract-content" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723658 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="453006f5-8304-47d9-b9d8-a4cc69692dcc" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.723682 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca00d8b2-3728-456f-bf49-285fb31385ef" containerName="registry-server" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.731171 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.733910 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vntr"] Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.757100 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnwjv\" (UniqueName: \"kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.757279 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.757361 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.857880 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.857972 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnwjv\" (UniqueName: \"kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.858062 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.858783 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.858802 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:22:59 crc kubenswrapper[4842]: I0202 07:22:59.879251 4842 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wnwjv\" (UniqueName: \"kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv\") pod \"redhat-marketplace-5vntr\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:23:00 crc kubenswrapper[4842]: I0202 07:23:00.057961 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:23:00 crc kubenswrapper[4842]: I0202 07:23:00.529431 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vntr"] Feb 02 07:23:01 crc kubenswrapper[4842]: I0202 07:23:01.031688 4842 generic.go:334] "Generic (PLEG): container finished" podID="5dd671b4-cc04-4a87-a275-dea779856d29" containerID="794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315" exitCode=0 Feb 02 07:23:01 crc kubenswrapper[4842]: I0202 07:23:01.031862 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerDied","Data":"794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315"} Feb 02 07:23:01 crc kubenswrapper[4842]: I0202 07:23:01.032332 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerStarted","Data":"e5a81788552af8e52157e4072852e261209293db616808e28f3f1089dc73b9a0"} Feb 02 07:23:03 crc kubenswrapper[4842]: I0202 07:23:03.059697 4842 generic.go:334] "Generic (PLEG): container finished" podID="5dd671b4-cc04-4a87-a275-dea779856d29" containerID="ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3" exitCode=0 Feb 02 07:23:03 crc kubenswrapper[4842]: I0202 07:23:03.059791 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerDied","Data":"ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3"} Feb 02 07:23:04 crc kubenswrapper[4842]: I0202 07:23:04.072366 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerStarted","Data":"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb"} Feb 02 07:23:04 crc kubenswrapper[4842]: I0202 07:23:04.106084 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5vntr" podStartSLOduration=2.576027165 podStartE2EDuration="5.106048639s" podCreationTimestamp="2026-02-02 07:22:59 +0000 UTC" firstStartedPulling="2026-02-02 07:23:01.035071522 +0000 UTC m=+2206.412339464" lastFinishedPulling="2026-02-02 07:23:03.565093026 +0000 UTC m=+2208.942360938" observedRunningTime="2026-02-02 07:23:04.100668536 +0000 UTC m=+2209.477936478" watchObservedRunningTime="2026-02-02 07:23:04.106048639 +0000 UTC m=+2209.483316581" Feb 02 07:23:07 crc kubenswrapper[4842]: I0202 07:23:07.433548 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:23:07 crc kubenswrapper[4842]: E0202 07:23:07.434147 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:23:10 crc kubenswrapper[4842]: I0202 07:23:10.058553 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:23:10 crc kubenswrapper[4842]: I0202 07:23:10.058654 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:23:10 crc kubenswrapper[4842]: I0202 07:23:10.129883 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:23:10 crc kubenswrapper[4842]: I0202 07:23:10.216584 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:23:10 crc kubenswrapper[4842]: I0202 07:23:10.379727 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vntr"] Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.148303 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5vntr" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="registry-server" containerID="cri-o://8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb" gracePeriod=2 Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.606002 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.776639 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnwjv\" (UniqueName: \"kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv\") pod \"5dd671b4-cc04-4a87-a275-dea779856d29\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.776711 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content\") pod \"5dd671b4-cc04-4a87-a275-dea779856d29\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.776756 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities\") pod \"5dd671b4-cc04-4a87-a275-dea779856d29\" (UID: \"5dd671b4-cc04-4a87-a275-dea779856d29\") " Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.778697 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities" (OuterVolumeSpecName: "utilities") pod "5dd671b4-cc04-4a87-a275-dea779856d29" (UID: "5dd671b4-cc04-4a87-a275-dea779856d29"). InnerVolumeSpecName "utilities". 
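
The probe transitions above (startup "" → "unhealthy" → "started", then readiness "" → "ready") flip within a few hundred milliseconds of each other once the registry server answers, because readiness probes only begin after the startup probe has succeeded. The real marketplace probe definition is not in this log; the pair below is a stand-in using a TCP check on port 50051 (an assumed port and thresholds), just to show the shape of such a startup/readiness pairing:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Stand-in probes: port 50051 and all thresholds are assumptions.
	// Readiness (and liveness) probes only run once the startup probe has
	// succeeded, which is why "started" is followed so quickly by "ready"
	// in the entries above.
	check := corev1.ProbeHandler{TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(50051)}}
	startup := corev1.Probe{ProbeHandler: check, PeriodSeconds: 10, FailureThreshold: 30}
	readiness := corev1.Probe{ProbeHandler: check, PeriodSeconds: 10}
	fmt.Printf("startup: %+v\nreadiness: %+v\n", startup, readiness)
}
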
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.784472 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv" (OuterVolumeSpecName: "kube-api-access-wnwjv") pod "5dd671b4-cc04-4a87-a275-dea779856d29" (UID: "5dd671b4-cc04-4a87-a275-dea779856d29"). InnerVolumeSpecName "kube-api-access-wnwjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.823936 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dd671b4-cc04-4a87-a275-dea779856d29" (UID: "5dd671b4-cc04-4a87-a275-dea779856d29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.878346 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnwjv\" (UniqueName: \"kubernetes.io/projected/5dd671b4-cc04-4a87-a275-dea779856d29-kube-api-access-wnwjv\") on node \"crc\" DevicePath \"\"" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.878384 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:23:12 crc kubenswrapper[4842]: I0202 07:23:12.878397 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dd671b4-cc04-4a87-a275-dea779856d29-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.162187 4842 generic.go:334] "Generic (PLEG): container finished" podID="5dd671b4-cc04-4a87-a275-dea779856d29" containerID="8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb" exitCode=0 Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.162291 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerDied","Data":"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb"} Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.162342 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5vntr" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.162358 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5vntr" event={"ID":"5dd671b4-cc04-4a87-a275-dea779856d29","Type":"ContainerDied","Data":"e5a81788552af8e52157e4072852e261209293db616808e28f3f1089dc73b9a0"} Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.162393 4842 scope.go:117] "RemoveContainer" containerID="8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.192179 4842 scope.go:117] "RemoveContainer" containerID="ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.226742 4842 scope.go:117] "RemoveContainer" containerID="794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.234441 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vntr"] Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.246628 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5vntr"] Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.276471 4842 scope.go:117] "RemoveContainer" containerID="8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb" Feb 02 07:23:13 crc kubenswrapper[4842]: E0202 07:23:13.280641 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb\": container with ID starting with 8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb not found: ID does not exist" containerID="8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.280708 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb"} err="failed to get container status \"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb\": rpc error: code = NotFound desc = could not find container \"8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb\": container with ID starting with 8643a609541066e97e50c3d4fce3029229aa3638ba291eb849983bdceb67ecfb not found: ID does not exist" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.280750 4842 scope.go:117] "RemoveContainer" containerID="ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3" Feb 02 07:23:13 crc kubenswrapper[4842]: E0202 07:23:13.281282 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3\": container with ID starting with ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3 not found: ID does not exist" containerID="ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.281325 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3"} err="failed to get container status \"ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3\": rpc error: code = NotFound desc = could not find 
container \"ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3\": container with ID starting with ee413c34ffebda9ea6c4ca19141537cad8ad2a3933bd2c0f16d1c733361f30c3 not found: ID does not exist" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.281358 4842 scope.go:117] "RemoveContainer" containerID="794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315" Feb 02 07:23:13 crc kubenswrapper[4842]: E0202 07:23:13.281696 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315\": container with ID starting with 794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315 not found: ID does not exist" containerID="794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.281821 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315"} err="failed to get container status \"794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315\": rpc error: code = NotFound desc = could not find container \"794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315\": container with ID starting with 794af9097d27e6c13f2acda19d0709f130908212fcd3ec959407fc246da1e315 not found: ID does not exist" Feb 02 07:23:13 crc kubenswrapper[4842]: I0202 07:23:13.450015 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" path="/var/lib/kubelet/pods/5dd671b4-cc04-4a87-a275-dea779856d29/volumes" Feb 02 07:23:19 crc kubenswrapper[4842]: I0202 07:23:19.433374 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:23:19 crc kubenswrapper[4842]: E0202 07:23:19.433869 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:23:32 crc kubenswrapper[4842]: I0202 07:23:32.433776 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:23:32 crc kubenswrapper[4842]: E0202 07:23:32.434950 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:23:45 crc kubenswrapper[4842]: I0202 07:23:45.439932 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:23:45 crc kubenswrapper[4842]: E0202 07:23:45.440945 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.904893 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s582b"] Feb 02 07:23:57 crc kubenswrapper[4842]: E0202 07:23:57.906277 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="registry-server" Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.906312 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="registry-server" Feb 02 07:23:57 crc kubenswrapper[4842]: E0202 07:23:57.906361 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="extract-content" Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.906381 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="extract-content" Feb 02 07:23:57 crc kubenswrapper[4842]: E0202 07:23:57.906427 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="extract-utilities" Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.906444 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="extract-utilities" Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.906786 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd671b4-cc04-4a87-a275-dea779856d29" containerName="registry-server" Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.909091 4842 util.go:30] "No sandbox for pod can be found. 
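
Every kubelet record here carries the klog header "Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg"; the "Feb 02 ... crc kubenswrapper[4842]:" prefix in front of it is added by journald, not klog. A small Go parser for pulling these lines apart:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// klog header: severity, mmdd time, thread id, file:line, message.
	re := regexp.MustCompile(`^([IWEF])(\d{4} \d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+(\S+:\d+)\] (.*)$`)
	line := `I0202 07:23:57.904893 4842 kubelet.go:2421] "SyncLoop ADD" source="api"`
	if m := re.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s time=%s tid=%s at=%s msg=%s\n", m[1], m[2], m[3], m[4], m[5])
	}
}
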
Need to start a new one" pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:57 crc kubenswrapper[4842]: I0202 07:23:57.941911 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s582b"] Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.037983 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.038889 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmft\" (UniqueName: \"kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.039044 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.146072 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dmft\" (UniqueName: \"kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.146191 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.146237 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.146676 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.146719 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.165779 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2dmft\" (UniqueName: \"kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft\") pod \"community-operators-s582b\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.236498 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s582b" Feb 02 07:23:58 crc kubenswrapper[4842]: I0202 07:23:58.716108 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s582b"] Feb 02 07:23:59 crc kubenswrapper[4842]: I0202 07:23:59.674419 4842 generic.go:334] "Generic (PLEG): container finished" podID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerID="af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb" exitCode=0 Feb 02 07:23:59 crc kubenswrapper[4842]: I0202 07:23:59.674486 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerDied","Data":"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb"} Feb 02 07:23:59 crc kubenswrapper[4842]: I0202 07:23:59.674526 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerStarted","Data":"0718849fe0d8989a2e6ce298bf3ef3a019350d90cc19a1f763d7b226873cba7f"} Feb 02 07:24:00 crc kubenswrapper[4842]: I0202 07:24:00.433033 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:24:00 crc kubenswrapper[4842]: E0202 07:24:00.433565 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:24:00 crc kubenswrapper[4842]: I0202 07:24:00.685542 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerStarted","Data":"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756"} Feb 02 07:24:01 crc kubenswrapper[4842]: I0202 07:24:01.696268 4842 generic.go:334] "Generic (PLEG): container finished" podID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerID="21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756" exitCode=0 Feb 02 07:24:01 crc kubenswrapper[4842]: I0202 07:24:01.696355 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerDied","Data":"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756"} Feb 02 07:24:02 crc kubenswrapper[4842]: I0202 07:24:02.706487 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerStarted","Data":"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096"} Feb 02 07:24:02 crc kubenswrapper[4842]: I0202 07:24:02.749148 4842 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s582b" podStartSLOduration=3.337602704 podStartE2EDuration="5.749129686s" podCreationTimestamp="2026-02-02 07:23:57 +0000 UTC" firstStartedPulling="2026-02-02 07:23:59.67821433 +0000 UTC m=+2265.055482282" lastFinishedPulling="2026-02-02 07:24:02.089741322 +0000 UTC m=+2267.467009264" observedRunningTime="2026-02-02 07:24:02.740296858 +0000 UTC m=+2268.117564790" watchObservedRunningTime="2026-02-02 07:24:02.749129686 +0000 UTC m=+2268.126397608" Feb 02 07:24:08 crc kubenswrapper[4842]: I0202 07:24:08.237121 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s582b" Feb 02 07:24:08 crc kubenswrapper[4842]: I0202 07:24:08.237502 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s582b" Feb 02 07:24:08 crc kubenswrapper[4842]: I0202 07:24:08.281134 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s582b" Feb 02 07:24:08 crc kubenswrapper[4842]: I0202 07:24:08.844319 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s582b" Feb 02 07:24:08 crc kubenswrapper[4842]: I0202 07:24:08.906890 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s582b"] Feb 02 07:24:10 crc kubenswrapper[4842]: I0202 07:24:10.788192 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s582b" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="registry-server" containerID="cri-o://f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096" gracePeriod=2 Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.755871 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s582b" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.813017 4842 generic.go:334] "Generic (PLEG): container finished" podID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerID="f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096" exitCode=0 Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.813070 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerDied","Data":"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096"} Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.813103 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s582b" event={"ID":"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1","Type":"ContainerDied","Data":"0718849fe0d8989a2e6ce298bf3ef3a019350d90cc19a1f763d7b226873cba7f"} Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.813128 4842 scope.go:117] "RemoveContainer" containerID="f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.813328 4842 util.go:48] "No ready sandbox for pod can be found. 
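
The pod_startup_latency_tracker record above for community-operators-s582b is internally consistent: the E2E duration is the observed-running time minus the creation time, and the SLO duration excludes the image-pull window. Recomputing it from the logged timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the record above.
	created := parse("2026-02-02 07:23:57 +0000 UTC")
	firstPull := parse("2026-02-02 07:23:59.67821433 +0000 UTC")
	lastPull := parse("2026-02-02 07:24:02.089741322 +0000 UTC")
	running := parse("2026-02-02 07:24:02.749129686 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 5.749129686s, as logged
	fmt.Println("podStartSLOduration:", slo) // ~3.3376s, matching the record to within rounding
}
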
Need to start a new one" pod="openshift-marketplace/community-operators-s582b" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.855541 4842 scope.go:117] "RemoveContainer" containerID="21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.855699 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dmft\" (UniqueName: \"kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft\") pod \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.855786 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities\") pod \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.855902 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content\") pod \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\" (UID: \"abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1\") " Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.856706 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities" (OuterVolumeSpecName: "utilities") pod "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" (UID: "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.870057 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft" (OuterVolumeSpecName: "kube-api-access-2dmft") pod "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" (UID: "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1"). InnerVolumeSpecName "kube-api-access-2dmft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.890327 4842 scope.go:117] "RemoveContainer" containerID="af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.916020 4842 scope.go:117] "RemoveContainer" containerID="f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096" Feb 02 07:24:11 crc kubenswrapper[4842]: E0202 07:24:11.916359 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096\": container with ID starting with f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096 not found: ID does not exist" containerID="f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.916408 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096"} err="failed to get container status \"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096\": rpc error: code = NotFound desc = could not find container \"f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096\": container with ID starting with f23a106208ae057522b7bc7b86b8efcff94f4def5e938d69adfaf2fd020a3096 not found: ID does not exist" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.916432 4842 scope.go:117] "RemoveContainer" containerID="21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756" Feb 02 07:24:11 crc kubenswrapper[4842]: E0202 07:24:11.916916 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756\": container with ID starting with 21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756 not found: ID does not exist" containerID="21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.916946 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756"} err="failed to get container status \"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756\": rpc error: code = NotFound desc = could not find container \"21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756\": container with ID starting with 21852c2541ea27c7f40a78dfcadba80d293751eeb97c813b16dc3173d0f39756 not found: ID does not exist" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.917021 4842 scope.go:117] "RemoveContainer" containerID="af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb" Feb 02 07:24:11 crc kubenswrapper[4842]: E0202 07:24:11.917271 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb\": container with ID starting with af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb not found: ID does not exist" containerID="af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.917294 4842 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb"} err="failed to get container status \"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb\": rpc error: code = NotFound desc = could not find container \"af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb\": container with ID starting with af204c5189d9b8847efa2cd19ae068ce6c295f017121b3df0224eb4bcee68cbb not found: ID does not exist" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.924146 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" (UID: "abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.957352 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.957388 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dmft\" (UniqueName: \"kubernetes.io/projected/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-kube-api-access-2dmft\") on node \"crc\" DevicePath \"\"" Feb 02 07:24:11 crc kubenswrapper[4842]: I0202 07:24:11.957402 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:24:12 crc kubenswrapper[4842]: I0202 07:24:12.172747 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s582b"] Feb 02 07:24:12 crc kubenswrapper[4842]: I0202 07:24:12.180962 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s582b"] Feb 02 07:24:13 crc kubenswrapper[4842]: I0202 07:24:13.447755 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" path="/var/lib/kubelet/pods/abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1/volumes" Feb 02 07:24:14 crc kubenswrapper[4842]: I0202 07:24:14.434327 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:24:14 crc kubenswrapper[4842]: E0202 07:24:14.434986 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:24:28 crc kubenswrapper[4842]: I0202 07:24:28.434586 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:24:28 crc kubenswrapper[4842]: E0202 07:24:28.435814 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:24:40 crc kubenswrapper[4842]: I0202 07:24:40.434262 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:24:40 crc kubenswrapper[4842]: E0202 07:24:40.435133 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:24:53 crc kubenswrapper[4842]: I0202 07:24:53.575612 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:24:53 crc kubenswrapper[4842]: E0202 07:24:53.576959 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:25:05 crc kubenswrapper[4842]: I0202 07:25:05.440814 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:25:05 crc kubenswrapper[4842]: E0202 07:25:05.441843 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:25:17 crc kubenswrapper[4842]: I0202 07:25:17.433043 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:25:17 crc kubenswrapper[4842]: E0202 07:25:17.433903 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:25:31 crc kubenswrapper[4842]: I0202 07:25:31.434371 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:25:31 crc kubenswrapper[4842]: E0202 07:25:31.435717 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:25:44 crc kubenswrapper[4842]: I0202 07:25:44.434475 4842 
scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:25:44 crc kubenswrapper[4842]: E0202 07:25:44.435739 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:25:58 crc kubenswrapper[4842]: I0202 07:25:58.433350 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:25:58 crc kubenswrapper[4842]: E0202 07:25:58.434313 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:26:11 crc kubenswrapper[4842]: I0202 07:26:11.433352 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:26:11 crc kubenswrapper[4842]: E0202 07:26:11.434382 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:26:21 crc kubenswrapper[4842]: I0202 07:26:21.389637 4842 scope.go:117] "RemoveContainer" containerID="26b03de8273eeb8c731faea10ebe84f0a97c933934818912e8d4605f3c713f26" Feb 02 07:26:21 crc kubenswrapper[4842]: I0202 07:26:21.422890 4842 scope.go:117] "RemoveContainer" containerID="15961ca3966c5e19bf382f4ff38a45f3b4f496271c3a403b37983001d2953ade" Feb 02 07:26:21 crc kubenswrapper[4842]: I0202 07:26:21.453933 4842 scope.go:117] "RemoveContainer" containerID="54d29c0b963abf2e6cbe9930fdfb039211d0f6d3757608dff7e813a74402f5e9" Feb 02 07:26:23 crc kubenswrapper[4842]: I0202 07:26:23.433748 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:26:23 crc kubenswrapper[4842]: E0202 07:26:23.434545 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:26:37 crc kubenswrapper[4842]: I0202 07:26:37.439029 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:26:37 crc kubenswrapper[4842]: E0202 07:26:37.439969 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:26:50 crc kubenswrapper[4842]: I0202 07:26:50.434331 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:26:50 crc kubenswrapper[4842]: E0202 07:26:50.435190 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:27:03 crc kubenswrapper[4842]: I0202 07:27:03.434098 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:27:03 crc kubenswrapper[4842]: E0202 07:27:03.434788 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:27:14 crc kubenswrapper[4842]: I0202 07:27:14.434271 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:27:15 crc kubenswrapper[4842]: I0202 07:27:15.474999 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc"} Feb 02 07:29:42 crc kubenswrapper[4842]: I0202 07:29:42.146496 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:29:42 crc kubenswrapper[4842]: I0202 07:29:42.147113 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.155371 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"] Feb 02 07:30:00 crc kubenswrapper[4842]: E0202 07:30:00.156295 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="registry-server" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.156311 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="registry-server" Feb 02 07:30:00 crc kubenswrapper[4842]: E0202 07:30:00.156334 4842 
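The liveness failures at 07:29:42, 07:30:12 and 07:30:42 are 30s apart and all report the same symptom: the HTTP GET to 127.0.0.1:8798/health is refused outright. A minimal stdlib sketch of such an HTTP probe is below; kubelet's real prober also applies the probe spec's timeout, headers and result caching, so treat this as illustrative only.

```go
// Minimal sketch of an HTTP liveness check like the one failing above:
// GET the endpoint, treat 2xx/3xx as success, everything else (including
// "connection refused") as failure. Illustrative, not kubelet's prober.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string) error {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. dial tcp 127.0.0.1:8798: connect: connection refused
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("probe failed with status %d", resp.StatusCode)
}

func main() {
	if err := probe("http://127.0.0.1:8798/health"); err != nil {
		fmt.Println("Probe failed:", err) // mirrors the prober.go output above
	}
}
```

Three consecutive failures are consistent with the default failureThreshold of 3, after which the container is killed and restarted, as the "failed liveness probe, will be restarted" entry at 07:30:42 further down shows.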
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="extract-utilities" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.156341 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="extract-utilities" Feb 02 07:30:00 crc kubenswrapper[4842]: E0202 07:30:00.156352 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="extract-content" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.156361 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="extract-content" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.156541 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc1e7e9-2190-4bd7-98e3-94c14c9aa5c1" containerName="registry-server" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.157061 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.159573 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.160517 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.172548 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"] Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.349794 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxw7l\" (UniqueName: \"kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.349865 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.349932 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.451739 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxw7l\" (UniqueName: \"kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:00 crc kubenswrapper[4842]: 
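The attach/mount entries above, and the mirrored UnmountVolume/TearDown entries elsewhere in this log, come from a single reconcile loop that diffs a desired state of the world against the actual one: volumes that should exist but do not yet get mounted, and volumes that exist but are no longer wanted get torn down. A toy version of that diff follows, with plain string sets standing in for the kubelet's volume bookkeeping.

```go
// Toy desired-vs-actual reconcile loop in the spirit of the
// reconciler_common.go entries above. The real reconciler tracks far more
// state (device paths, SELinux contexts, per-pod mounts); this only shows
// the diffing pattern.
package main

import "fmt"

func reconcile(desired, actual map[string]bool) {
	for vol := range desired {
		if !actual[vol] {
			fmt.Println("MountVolume started for volume", vol)
			actual[vol] = true // MountVolume.SetUp succeeded
		}
	}
	for vol := range actual {
		if !desired[vol] {
			fmt.Println("UnmountVolume started for volume", vol)
			delete(actual, vol) // UnmountVolume.TearDown succeeded
		}
	}
}

func main() {
	actual := map[string]bool{}
	// Pod created: three volumes become desired, as for collect-profiles above.
	reconcile(map[string]bool{"config-volume": true, "secret-volume": true,
		"kube-api-access-bxw7l": true}, actual)
	// Pod deleted: nothing is desired any more, so everything unmounts.
	reconcile(map[string]bool{}, actual)
}
```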
I0202 07:30:00.451795 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.451829 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.452774 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.470369 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.471820 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxw7l\" (UniqueName: \"kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l\") pod \"collect-profiles-29500290-4rjz7\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:00 crc kubenswrapper[4842]: I0202 07:30:00.527667 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:01 crc kubenswrapper[4842]: I0202 07:30:01.014794 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"] Feb 02 07:30:01 crc kubenswrapper[4842]: I0202 07:30:01.910100 4842 generic.go:334] "Generic (PLEG): container finished" podID="2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" containerID="8dbf1ff40ae24c1cb278330205be0fe8707c50279bf4f5b00c195cfdd226a43f" exitCode=0 Feb 02 07:30:01 crc kubenswrapper[4842]: I0202 07:30:01.910154 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" event={"ID":"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe","Type":"ContainerDied","Data":"8dbf1ff40ae24c1cb278330205be0fe8707c50279bf4f5b00c195cfdd226a43f"} Feb 02 07:30:01 crc kubenswrapper[4842]: I0202 07:30:01.910179 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" event={"ID":"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe","Type":"ContainerStarted","Data":"42234ac27dca0ee5645ba71bf9d5f3f8fe88e03e1320e7bf8885da12a745dbd4"} Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.263265 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.395890 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume\") pod \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.396378 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume\") pod \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.396543 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxw7l\" (UniqueName: \"kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l\") pod \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\" (UID: \"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe\") " Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.397141 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume" (OuterVolumeSpecName: "config-volume") pod "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" (UID: "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.403640 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" (UID: "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.404480 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l" (OuterVolumeSpecName: "kube-api-access-bxw7l") pod "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" (UID: "2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe"). InnerVolumeSpecName "kube-api-access-bxw7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.497787 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.497821 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxw7l\" (UniqueName: \"kubernetes.io/projected/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-kube-api-access-bxw7l\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.497831 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.933570 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" event={"ID":"2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe","Type":"ContainerDied","Data":"42234ac27dca0ee5645ba71bf9d5f3f8fe88e03e1320e7bf8885da12a745dbd4"} Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.933680 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42234ac27dca0ee5645ba71bf9d5f3f8fe88e03e1320e7bf8885da12a745dbd4" Feb 02 07:30:03 crc kubenswrapper[4842]: I0202 07:30:03.933742 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7" Feb 02 07:30:04 crc kubenswrapper[4842]: I0202 07:30:04.369526 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"] Feb 02 07:30:04 crc kubenswrapper[4842]: I0202 07:30:04.379795 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500245-vpjnw"] Feb 02 07:30:05 crc kubenswrapper[4842]: I0202 07:30:05.446925 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b43b464-5623-46bb-8097-65b505d08960" path="/var/lib/kubelet/pods/5b43b464-5623-46bb-8097-65b505d08960/volumes" Feb 02 07:30:12 crc kubenswrapper[4842]: I0202 07:30:12.146544 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:30:12 crc kubenswrapper[4842]: I0202 07:30:12.147210 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:30:21 crc kubenswrapper[4842]: I0202 07:30:21.569771 4842 scope.go:117] "RemoveContainer" containerID="ba19112a26c109422079efb77e0284d9fe51d522c7191998e89b078a7d34963e" Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.146420 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.147290 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.147362 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.148310 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.148404 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc" gracePeriod=600 Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.295111 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc" exitCode=0 Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.295267 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc"} Feb 02 07:30:42 crc kubenswrapper[4842]: I0202 07:30:42.295333 4842 scope.go:117] "RemoveContainer" containerID="a62de31c0336c56aa0f6c1326da184c3477e80f02982ca81e1b3cd86b8b619e6" Feb 02 07:30:43 crc kubenswrapper[4842]: I0202 07:30:43.308500 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"} Feb 02 07:32:42 crc kubenswrapper[4842]: I0202 07:32:42.146189 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:32:42 crc kubenswrapper[4842]: I0202 07:32:42.146947 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:33:12 crc kubenswrapper[4842]: I0202 07:33:12.146025 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:33:12 crc kubenswrapper[4842]: I0202 07:33:12.146865 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.108650 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:38 crc kubenswrapper[4842]: E0202 07:33:38.110137 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" containerName="collect-profiles" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.110281 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" containerName="collect-profiles" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.111469 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" containerName="collect-profiles" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.113396 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.145086 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.232373 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.232756 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.232886 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmg56\" (UniqueName: \"kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.334611 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmg56\" (UniqueName: \"kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.334931 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content\") pod 
\"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.335013 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.336582 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.337120 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.377957 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmg56\" (UniqueName: \"kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56\") pod \"redhat-marketplace-hdmsn\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.453531 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:38 crc kubenswrapper[4842]: I0202 07:33:38.898594 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:38 crc kubenswrapper[4842]: W0202 07:33:38.905357 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode169b475_82e9_44a2_8cd1_9b1290cbc992.slice/crio-79f80c5b2cdb00639563a8b46652fde2f054fc2dd31e3e5a67f6a910c405d8d7 WatchSource:0}: Error finding container 79f80c5b2cdb00639563a8b46652fde2f054fc2dd31e3e5a67f6a910c405d8d7: Status 404 returned error can't find the container with id 79f80c5b2cdb00639563a8b46652fde2f054fc2dd31e3e5a67f6a910c405d8d7 Feb 02 07:33:39 crc kubenswrapper[4842]: I0202 07:33:39.264154 4842 generic.go:334] "Generic (PLEG): container finished" podID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerID="f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141" exitCode=0 Feb 02 07:33:39 crc kubenswrapper[4842]: I0202 07:33:39.264286 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerDied","Data":"f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141"} Feb 02 07:33:39 crc kubenswrapper[4842]: I0202 07:33:39.264362 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerStarted","Data":"79f80c5b2cdb00639563a8b46652fde2f054fc2dd31e3e5a67f6a910c405d8d7"} Feb 02 07:33:39 crc kubenswrapper[4842]: I0202 07:33:39.266881 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:33:41 crc kubenswrapper[4842]: I0202 07:33:41.284908 4842 generic.go:334] "Generic (PLEG): container finished" podID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerID="d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327" exitCode=0 Feb 02 07:33:41 crc kubenswrapper[4842]: I0202 07:33:41.285087 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerDied","Data":"d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327"} Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.146577 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.146961 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.147011 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.147674 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.147743 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" gracePeriod=600 Feb 02 07:33:42 crc kubenswrapper[4842]: E0202 07:33:42.270726 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.297877 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" exitCode=0 Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.297927 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda"} Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.298020 4842 scope.go:117] "RemoveContainer" containerID="a9931981a4064c9f36b17b435306ca3fae47f32d429034eb76a44a6791939efc" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.298638 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:33:42 crc kubenswrapper[4842]: E0202 07:33:42.298910 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.300540 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerStarted","Data":"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1"} Feb 02 07:33:42 crc kubenswrapper[4842]: I0202 07:33:42.367327 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hdmsn" podStartSLOduration=1.745830914 podStartE2EDuration="4.36730983s" podCreationTimestamp="2026-02-02 07:33:38 +0000 UTC" firstStartedPulling="2026-02-02 07:33:39.266439234 +0000 UTC m=+2844.643707176" lastFinishedPulling="2026-02-02 07:33:41.88791814 +0000 UTC m=+2847.265186092" observedRunningTime="2026-02-02 07:33:42.36448272 +0000 UTC m=+2847.741750642" watchObservedRunningTime="2026-02-02 07:33:42.36730983 +0000 UTC m=+2847.744577742" Feb 02 
07:33:48 crc kubenswrapper[4842]: I0202 07:33:48.453883 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:48 crc kubenswrapper[4842]: I0202 07:33:48.454786 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:48 crc kubenswrapper[4842]: I0202 07:33:48.525471 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:49 crc kubenswrapper[4842]: I0202 07:33:49.448211 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:49 crc kubenswrapper[4842]: I0202 07:33:49.522944 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.374816 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hdmsn" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="registry-server" containerID="cri-o://c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1" gracePeriod=2 Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.783169 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.854493 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities\") pod \"e169b475-82e9-44a2-8cd1-9b1290cbc992\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.862322 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities" (OuterVolumeSpecName: "utilities") pod "e169b475-82e9-44a2-8cd1-9b1290cbc992" (UID: "e169b475-82e9-44a2-8cd1-9b1290cbc992"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.862565 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content\") pod \"e169b475-82e9-44a2-8cd1-9b1290cbc992\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.865494 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmg56\" (UniqueName: \"kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56\") pod \"e169b475-82e9-44a2-8cd1-9b1290cbc992\" (UID: \"e169b475-82e9-44a2-8cd1-9b1290cbc992\") " Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.865971 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.871780 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56" (OuterVolumeSpecName: "kube-api-access-dmg56") pod "e169b475-82e9-44a2-8cd1-9b1290cbc992" (UID: "e169b475-82e9-44a2-8cd1-9b1290cbc992"). InnerVolumeSpecName "kube-api-access-dmg56". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.890101 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e169b475-82e9-44a2-8cd1-9b1290cbc992" (UID: "e169b475-82e9-44a2-8cd1-9b1290cbc992"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.966951 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e169b475-82e9-44a2-8cd1-9b1290cbc992-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:51 crc kubenswrapper[4842]: I0202 07:33:51.966980 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmg56\" (UniqueName: \"kubernetes.io/projected/e169b475-82e9-44a2-8cd1-9b1290cbc992-kube-api-access-dmg56\") on node \"crc\" DevicePath \"\"" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.384032 4842 generic.go:334] "Generic (PLEG): container finished" podID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerID="c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1" exitCode=0 Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.384091 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerDied","Data":"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1"} Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.384106 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmsn" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.384125 4842 scope.go:117] "RemoveContainer" containerID="c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.384115 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmsn" event={"ID":"e169b475-82e9-44a2-8cd1-9b1290cbc992","Type":"ContainerDied","Data":"79f80c5b2cdb00639563a8b46652fde2f054fc2dd31e3e5a67f6a910c405d8d7"} Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.405760 4842 scope.go:117] "RemoveContainer" containerID="d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.422808 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.429379 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmsn"] Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.440150 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:33:52 crc kubenswrapper[4842]: E0202 07:33:52.440398 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.443873 4842 scope.go:117] "RemoveContainer" containerID="f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.469866 4842 scope.go:117] "RemoveContainer" containerID="c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1" Feb 02 07:33:52 crc kubenswrapper[4842]: E0202 07:33:52.470470 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1\": container with ID starting with c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1 not found: ID does not exist" containerID="c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.470539 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1"} err="failed to get container status \"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1\": rpc error: code = NotFound desc = could not find container \"c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1\": container with ID starting with c2a1f43d4b82fe9f34bf9ba9273993f86ef7adc90e54d4191541430b4ae8f5c1 not found: ID does not exist" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.470573 4842 scope.go:117] "RemoveContainer" containerID="d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327" Feb 02 07:33:52 crc kubenswrapper[4842]: E0202 07:33:52.471166 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327\": container with ID starting with d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327 not found: ID does not exist" containerID="d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.471205 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327"} err="failed to get container status \"d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327\": rpc error: code = NotFound desc = could not find container \"d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327\": container with ID starting with d6dfe2c0ce97a9d24fa29db221abf0065e5b4a09a6ef65404d035be7e72e0327 not found: ID does not exist" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.471250 4842 scope.go:117] "RemoveContainer" containerID="f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141" Feb 02 07:33:52 crc kubenswrapper[4842]: E0202 07:33:52.471625 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141\": container with ID starting with f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141 not found: ID does not exist" containerID="f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141" Feb 02 07:33:52 crc kubenswrapper[4842]: I0202 07:33:52.471688 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141"} err="failed to get container status \"f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141\": rpc error: code = NotFound desc = could not find container \"f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141\": container with ID starting with f58febd6dc85b54e8a4c723b3a4025b183513c26c8c6afa917221192d506a141 not found: ID does not exist" Feb 02 07:33:53 crc kubenswrapper[4842]: I0202 07:33:53.448563 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" path="/var/lib/kubelet/pods/e169b475-82e9-44a2-8cd1-9b1290cbc992/volumes" Feb 02 07:34:03 crc kubenswrapper[4842]: I0202 07:34:03.433785 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:34:03 crc kubenswrapper[4842]: E0202 07:34:03.434783 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:34:14 crc kubenswrapper[4842]: I0202 07:34:14.434631 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:34:14 crc kubenswrapper[4842]: E0202 07:34:14.436201 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.892252 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"] Feb 02 07:34:15 crc kubenswrapper[4842]: E0202 07:34:15.893272 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="extract-utilities" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.893306 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="extract-utilities" Feb 02 07:34:15 crc kubenswrapper[4842]: E0202 07:34:15.893367 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="extract-content" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.893389 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="extract-content" Feb 02 07:34:15 crc kubenswrapper[4842]: E0202 07:34:15.893414 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="registry-server" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.893429 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="registry-server" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.893733 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e169b475-82e9-44a2-8cd1-9b1290cbc992" containerName="registry-server" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.896852 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:15 crc kubenswrapper[4842]: I0202 07:34:15.960813 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"] Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.046823 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnztg\" (UniqueName: \"kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.046913 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.046959 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.148275 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.148442 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnztg\" (UniqueName: \"kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.148506 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.148831 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.148959 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.173459 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mnztg\" (UniqueName: \"kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg\") pod \"redhat-operators-2wsvb\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.268536 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.526354 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"] Feb 02 07:34:16 crc kubenswrapper[4842]: I0202 07:34:16.626425 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerStarted","Data":"1f32afced739696c72206844574f32ea8877ddc224d52507ad2399e87f80a1d6"} Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.638979 4842 generic.go:334] "Generic (PLEG): container finished" podID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerID="f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c" exitCode=0 Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.639049 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerDied","Data":"f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c"} Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.693751 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"] Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.696402 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.713662 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"] Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.800156 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.800596 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl9w4\" (UniqueName: \"kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.800676 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.902375 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl9w4\" (UniqueName: \"kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.902458 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.902528 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.904028 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.918747 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:17 crc kubenswrapper[4842]: I0202 07:34:17.964721 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kl9w4\" (UniqueName: \"kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4\") pod \"certified-operators-tcqpr\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:18 crc kubenswrapper[4842]: I0202 07:34:18.028661 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:18 crc kubenswrapper[4842]: I0202 07:34:18.548787 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"] Feb 02 07:34:18 crc kubenswrapper[4842]: I0202 07:34:18.650575 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerStarted","Data":"83159ccc32f7be030ca5abe567af2ef0943590860edaa29b62e4d57bd3a56973"} Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.079445 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qp6vd"] Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.081152 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.100512 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qp6vd"] Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.146268 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.146395 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqltb\" (UniqueName: \"kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.146457 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.247801 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.247901 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " 
pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.247974 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqltb\" (UniqueName: \"kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.248393 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.248531 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.281696 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqltb\" (UniqueName: \"kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb\") pod \"community-operators-qp6vd\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.407454 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.672962 4842 generic.go:334] "Generic (PLEG): container finished" podID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerID="040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85" exitCode=0 Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.673088 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerDied","Data":"040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85"} Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.678276 4842 generic.go:334] "Generic (PLEG): container finished" podID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerID="4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b" exitCode=0 Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.678324 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerDied","Data":"4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b"} Feb 02 07:34:19 crc kubenswrapper[4842]: I0202 07:34:19.905504 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qp6vd"] Feb 02 07:34:19 crc kubenswrapper[4842]: W0202 07:34:19.914425 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22349677_a0b4_43a2_9a43_61b9bbd55eed.slice/crio-1adcdbd0c81ef4178cdbf4dee7a1e44951efd5b8a20d82e6aa0762bba814c1bf WatchSource:0}: Error finding container 
1adcdbd0c81ef4178cdbf4dee7a1e44951efd5b8a20d82e6aa0762bba814c1bf: Status 404 returned error can't find the container with id 1adcdbd0c81ef4178cdbf4dee7a1e44951efd5b8a20d82e6aa0762bba814c1bf Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.689953 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerStarted","Data":"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008"} Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.692525 4842 generic.go:334] "Generic (PLEG): container finished" podID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerID="458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6" exitCode=0 Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.692579 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerDied","Data":"458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6"} Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.692613 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerStarted","Data":"1adcdbd0c81ef4178cdbf4dee7a1e44951efd5b8a20d82e6aa0762bba814c1bf"} Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.696332 4842 generic.go:334] "Generic (PLEG): container finished" podID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerID="08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2" exitCode=0 Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.696356 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerDied","Data":"08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2"} Feb 02 07:34:20 crc kubenswrapper[4842]: I0202 07:34:20.720855 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2wsvb" podStartSLOduration=3.065599174 podStartE2EDuration="5.720836945s" podCreationTimestamp="2026-02-02 07:34:15 +0000 UTC" firstStartedPulling="2026-02-02 07:34:17.640838395 +0000 UTC m=+2883.018106337" lastFinishedPulling="2026-02-02 07:34:20.296076196 +0000 UTC m=+2885.673344108" observedRunningTime="2026-02-02 07:34:20.713303199 +0000 UTC m=+2886.090571121" watchObservedRunningTime="2026-02-02 07:34:20.720836945 +0000 UTC m=+2886.098104867" Feb 02 07:34:21 crc kubenswrapper[4842]: I0202 07:34:21.705584 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerStarted","Data":"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b"} Feb 02 07:34:21 crc kubenswrapper[4842]: I0202 07:34:21.707609 4842 generic.go:334] "Generic (PLEG): container finished" podID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerID="9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8" exitCode=0 Feb 02 07:34:21 crc kubenswrapper[4842]: I0202 07:34:21.708397 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerDied","Data":"9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8"} Feb 02 07:34:21 crc 
kubenswrapper[4842]: I0202 07:34:21.748453 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tcqpr" podStartSLOduration=3.320714916 podStartE2EDuration="4.748430709s" podCreationTimestamp="2026-02-02 07:34:17 +0000 UTC" firstStartedPulling="2026-02-02 07:34:19.674858117 +0000 UTC m=+2885.052126029" lastFinishedPulling="2026-02-02 07:34:21.10257388 +0000 UTC m=+2886.479841822" observedRunningTime="2026-02-02 07:34:21.739206551 +0000 UTC m=+2887.116474463" watchObservedRunningTime="2026-02-02 07:34:21.748430709 +0000 UTC m=+2887.125698631" Feb 02 07:34:22 crc kubenswrapper[4842]: I0202 07:34:22.716670 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerStarted","Data":"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c"} Feb 02 07:34:22 crc kubenswrapper[4842]: I0202 07:34:22.742718 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qp6vd" podStartSLOduration=2.131209228 podStartE2EDuration="3.742695507s" podCreationTimestamp="2026-02-02 07:34:19 +0000 UTC" firstStartedPulling="2026-02-02 07:34:20.693896679 +0000 UTC m=+2886.071164591" lastFinishedPulling="2026-02-02 07:34:22.305382958 +0000 UTC m=+2887.682650870" observedRunningTime="2026-02-02 07:34:22.740269037 +0000 UTC m=+2888.117536999" watchObservedRunningTime="2026-02-02 07:34:22.742695507 +0000 UTC m=+2888.119963419" Feb 02 07:34:26 crc kubenswrapper[4842]: I0202 07:34:26.268800 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:26 crc kubenswrapper[4842]: I0202 07:34:26.271492 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:27 crc kubenswrapper[4842]: I0202 07:34:27.327125 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2wsvb" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="registry-server" probeResult="failure" output=< Feb 02 07:34:27 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 07:34:27 crc kubenswrapper[4842]: > Feb 02 07:34:27 crc kubenswrapper[4842]: I0202 07:34:27.434724 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:34:27 crc kubenswrapper[4842]: E0202 07:34:27.435199 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:34:28 crc kubenswrapper[4842]: I0202 07:34:28.030597 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:28 crc kubenswrapper[4842]: I0202 07:34:28.030676 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:28 crc kubenswrapper[4842]: I0202 07:34:28.104106 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:28 crc kubenswrapper[4842]: I0202 07:34:28.836807 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:28 crc kubenswrapper[4842]: I0202 07:34:28.912387 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"] Feb 02 07:34:29 crc kubenswrapper[4842]: I0202 07:34:29.408257 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:29 crc kubenswrapper[4842]: I0202 07:34:29.408849 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:29 crc kubenswrapper[4842]: I0202 07:34:29.459488 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:29 crc kubenswrapper[4842]: I0202 07:34:29.811957 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:30 crc kubenswrapper[4842]: I0202 07:34:30.758393 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qp6vd"] Feb 02 07:34:30 crc kubenswrapper[4842]: I0202 07:34:30.779137 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tcqpr" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="registry-server" containerID="cri-o://12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b" gracePeriod=2 Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.722404 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.790789 4842 generic.go:334] "Generic (PLEG): container finished" podID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerID="12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b" exitCode=0 Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.790844 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tcqpr" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.790884 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerDied","Data":"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b"} Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.790962 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcqpr" event={"ID":"9a5e892e-8cde-49ea-ad01-14593db40e0e","Type":"ContainerDied","Data":"83159ccc32f7be030ca5abe567af2ef0943590860edaa29b62e4d57bd3a56973"} Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.790994 4842 scope.go:117] "RemoveContainer" containerID="12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.793077 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qp6vd" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="registry-server" containerID="cri-o://76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c" gracePeriod=2 Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.812270 4842 scope.go:117] "RemoveContainer" containerID="08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.837003 4842 scope.go:117] "RemoveContainer" containerID="040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.872588 4842 scope.go:117] "RemoveContainer" containerID="12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b" Feb 02 07:34:31 crc kubenswrapper[4842]: E0202 07:34:31.873035 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b\": container with ID starting with 12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b not found: ID does not exist" containerID="12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.873074 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b"} err="failed to get container status \"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b\": rpc error: code = NotFound desc = could not find container \"12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b\": container with ID starting with 12cd925754a46d4caca4ee280c327b6659863bffe20945b184df7d09d66a616b not found: ID does not exist" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.873100 4842 scope.go:117] "RemoveContainer" containerID="08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2" Feb 02 07:34:31 crc kubenswrapper[4842]: E0202 07:34:31.873438 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2\": container with ID starting with 08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2 not found: ID does not exist" containerID="08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2" Feb 02 07:34:31 crc kubenswrapper[4842]: 
I0202 07:34:31.873463 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2"} err="failed to get container status \"08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2\": rpc error: code = NotFound desc = could not find container \"08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2\": container with ID starting with 08c9cfe888034508575d594595d2a6b040714258a4e31d58abc3a38ab9e20ad2 not found: ID does not exist" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.873476 4842 scope.go:117] "RemoveContainer" containerID="040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85" Feb 02 07:34:31 crc kubenswrapper[4842]: E0202 07:34:31.873680 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85\": container with ID starting with 040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85 not found: ID does not exist" containerID="040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.873695 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85"} err="failed to get container status \"040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85\": rpc error: code = NotFound desc = could not find container \"040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85\": container with ID starting with 040aad422592f01aeed3762b9f8803e85cdfa4536c1d5280744995de01e71d85 not found: ID does not exist" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.880901 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl9w4\" (UniqueName: \"kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4\") pod \"9a5e892e-8cde-49ea-ad01-14593db40e0e\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.880996 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content\") pod \"9a5e892e-8cde-49ea-ad01-14593db40e0e\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.881575 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities\") pod \"9a5e892e-8cde-49ea-ad01-14593db40e0e\" (UID: \"9a5e892e-8cde-49ea-ad01-14593db40e0e\") " Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.882570 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities" (OuterVolumeSpecName: "utilities") pod "9a5e892e-8cde-49ea-ad01-14593db40e0e" (UID: "9a5e892e-8cde-49ea-ad01-14593db40e0e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.886828 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4" (OuterVolumeSpecName: "kube-api-access-kl9w4") pod "9a5e892e-8cde-49ea-ad01-14593db40e0e" (UID: "9a5e892e-8cde-49ea-ad01-14593db40e0e"). InnerVolumeSpecName "kube-api-access-kl9w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.941995 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a5e892e-8cde-49ea-ad01-14593db40e0e" (UID: "9a5e892e-8cde-49ea-ad01-14593db40e0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.983125 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl9w4\" (UniqueName: \"kubernetes.io/projected/9a5e892e-8cde-49ea-ad01-14593db40e0e-kube-api-access-kl9w4\") on node \"crc\" DevicePath \"\"" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.983155 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:34:31 crc kubenswrapper[4842]: I0202 07:34:31.983166 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a5e892e-8cde-49ea-ad01-14593db40e0e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.167940 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"] Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.173504 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tcqpr"] Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.744926 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.799694 4842 generic.go:334] "Generic (PLEG): container finished" podID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerID="76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c" exitCode=0 Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.799757 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerDied","Data":"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c"} Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.799776 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qp6vd" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.799781 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qp6vd" event={"ID":"22349677-a0b4-43a2-9a43-61b9bbd55eed","Type":"ContainerDied","Data":"1adcdbd0c81ef4178cdbf4dee7a1e44951efd5b8a20d82e6aa0762bba814c1bf"} Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.799797 4842 scope.go:117] "RemoveContainer" containerID="76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.817905 4842 scope.go:117] "RemoveContainer" containerID="9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.837732 4842 scope.go:117] "RemoveContainer" containerID="458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.869269 4842 scope.go:117] "RemoveContainer" containerID="76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c" Feb 02 07:34:32 crc kubenswrapper[4842]: E0202 07:34:32.869732 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c\": container with ID starting with 76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c not found: ID does not exist" containerID="76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.869761 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c"} err="failed to get container status \"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c\": rpc error: code = NotFound desc = could not find container \"76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c\": container with ID starting with 76deb60b5abc77a6556a5bbf7cb59524c453e7cfb1fea0d4ce4eb44a05e6d01c not found: ID does not exist" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.869785 4842 scope.go:117] "RemoveContainer" containerID="9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8" Feb 02 07:34:32 crc kubenswrapper[4842]: E0202 07:34:32.870182 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8\": container with ID starting with 9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8 not found: ID does not exist" containerID="9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.870262 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8"} err="failed to get container status \"9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8\": rpc error: code = NotFound desc = could not find container \"9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8\": container with ID starting with 9ddd7e7d6ec60c2c3a65615f5dd58582f6b676141e2bddb1388f27f7acc6efd8 not found: ID does not exist" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.870310 4842 scope.go:117] "RemoveContainer" 
containerID="458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6" Feb 02 07:34:32 crc kubenswrapper[4842]: E0202 07:34:32.870648 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6\": container with ID starting with 458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6 not found: ID does not exist" containerID="458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.870673 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6"} err="failed to get container status \"458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6\": rpc error: code = NotFound desc = could not find container \"458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6\": container with ID starting with 458f2af06b0b6dae87a01806bdcbfc5c8535b49d28661ed9aff39b5b756278a6 not found: ID does not exist" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.896871 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqltb\" (UniqueName: \"kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb\") pod \"22349677-a0b4-43a2-9a43-61b9bbd55eed\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.897068 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content\") pod \"22349677-a0b4-43a2-9a43-61b9bbd55eed\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.897271 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities\") pod \"22349677-a0b4-43a2-9a43-61b9bbd55eed\" (UID: \"22349677-a0b4-43a2-9a43-61b9bbd55eed\") " Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.898843 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities" (OuterVolumeSpecName: "utilities") pod "22349677-a0b4-43a2-9a43-61b9bbd55eed" (UID: "22349677-a0b4-43a2-9a43-61b9bbd55eed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.905724 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb" (OuterVolumeSpecName: "kube-api-access-cqltb") pod "22349677-a0b4-43a2-9a43-61b9bbd55eed" (UID: "22349677-a0b4-43a2-9a43-61b9bbd55eed"). InnerVolumeSpecName "kube-api-access-cqltb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:34:32 crc kubenswrapper[4842]: I0202 07:34:32.946440 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22349677-a0b4-43a2-9a43-61b9bbd55eed" (UID: "22349677-a0b4-43a2-9a43-61b9bbd55eed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:32.999969 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqltb\" (UniqueName: \"kubernetes.io/projected/22349677-a0b4-43a2-9a43-61b9bbd55eed-kube-api-access-cqltb\") on node \"crc\" DevicePath \"\"" Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.000038 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.000058 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22349677-a0b4-43a2-9a43-61b9bbd55eed-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.157154 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qp6vd"] Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.168175 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qp6vd"] Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.459728 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" path="/var/lib/kubelet/pods/22349677-a0b4-43a2-9a43-61b9bbd55eed/volumes" Feb 02 07:34:33 crc kubenswrapper[4842]: I0202 07:34:33.461526 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" path="/var/lib/kubelet/pods/9a5e892e-8cde-49ea-ad01-14593db40e0e/volumes" Feb 02 07:34:36 crc kubenswrapper[4842]: I0202 07:34:36.345450 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:36 crc kubenswrapper[4842]: I0202 07:34:36.414393 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:37 crc kubenswrapper[4842]: I0202 07:34:37.162185 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"] Feb 02 07:34:37 crc kubenswrapper[4842]: I0202 07:34:37.849359 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2wsvb" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="registry-server" containerID="cri-o://78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008" gracePeriod=2 Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.324982 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.490588 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities\") pod \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.490734 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnztg\" (UniqueName: \"kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg\") pod \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.490812 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content\") pod \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\" (UID: \"ab4626e6-200f-4cd6-937d-4eb7cf9911ab\") " Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.492883 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities" (OuterVolumeSpecName: "utilities") pod "ab4626e6-200f-4cd6-937d-4eb7cf9911ab" (UID: "ab4626e6-200f-4cd6-937d-4eb7cf9911ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.496697 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg" (OuterVolumeSpecName: "kube-api-access-mnztg") pod "ab4626e6-200f-4cd6-937d-4eb7cf9911ab" (UID: "ab4626e6-200f-4cd6-937d-4eb7cf9911ab"). InnerVolumeSpecName "kube-api-access-mnztg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.592752 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.592780 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnztg\" (UniqueName: \"kubernetes.io/projected/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-kube-api-access-mnztg\") on node \"crc\" DevicePath \"\"" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.669436 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab4626e6-200f-4cd6-937d-4eb7cf9911ab" (UID: "ab4626e6-200f-4cd6-937d-4eb7cf9911ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.693924 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab4626e6-200f-4cd6-937d-4eb7cf9911ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.862069 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2wsvb" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.862132 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerDied","Data":"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008"} Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.862203 4842 scope.go:117] "RemoveContainer" containerID="78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.862023 4842 generic.go:334] "Generic (PLEG): container finished" podID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerID="78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008" exitCode=0 Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.862535 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2wsvb" event={"ID":"ab4626e6-200f-4cd6-937d-4eb7cf9911ab","Type":"ContainerDied","Data":"1f32afced739696c72206844574f32ea8877ddc224d52507ad2399e87f80a1d6"} Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.901704 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"] Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.905081 4842 scope.go:117] "RemoveContainer" containerID="4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.907803 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2wsvb"] Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.922838 4842 scope.go:117] "RemoveContainer" containerID="f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.966453 4842 scope.go:117] "RemoveContainer" containerID="78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008" Feb 02 07:34:38 crc kubenswrapper[4842]: E0202 07:34:38.966997 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008\": container with ID starting with 78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008 not found: ID does not exist" containerID="78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.967046 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008"} err="failed to get container status \"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008\": rpc error: code = NotFound desc = could not find container \"78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008\": container with ID starting with 78cc880c748f040750a27d09076e66f4d53b57a35bd9f70291d80b1021605008 not found: ID does not exist" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.967070 4842 scope.go:117] "RemoveContainer" containerID="4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b" Feb 02 07:34:38 crc kubenswrapper[4842]: E0202 07:34:38.967465 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b\": container with ID starting with 
4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b not found: ID does not exist" containerID="4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.967489 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b"} err="failed to get container status \"4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b\": rpc error: code = NotFound desc = could not find container \"4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b\": container with ID starting with 4a728162b812c701e40c30b6bfdb1e59fe43e20a5f66c0dea0e6c490f5f7c43b not found: ID does not exist" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.967504 4842 scope.go:117] "RemoveContainer" containerID="f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c" Feb 02 07:34:38 crc kubenswrapper[4842]: E0202 07:34:38.967892 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c\": container with ID starting with f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c not found: ID does not exist" containerID="f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c" Feb 02 07:34:38 crc kubenswrapper[4842]: I0202 07:34:38.967964 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c"} err="failed to get container status \"f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c\": rpc error: code = NotFound desc = could not find container \"f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c\": container with ID starting with f2cc66db62cf6e553c069a58c3115b94b137acad42647a9788e36a837c71756c not found: ID does not exist" Feb 02 07:34:39 crc kubenswrapper[4842]: I0202 07:34:39.467685 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" path="/var/lib/kubelet/pods/ab4626e6-200f-4cd6-937d-4eb7cf9911ab/volumes" Feb 02 07:34:40 crc kubenswrapper[4842]: I0202 07:34:40.432982 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:34:40 crc kubenswrapper[4842]: E0202 07:34:40.433609 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:34:53 crc kubenswrapper[4842]: I0202 07:34:53.434440 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:34:53 crc kubenswrapper[4842]: E0202 07:34:53.435578 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" 
podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:35:07 crc kubenswrapper[4842]: I0202 07:35:07.433562 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:35:07 crc kubenswrapper[4842]: E0202 07:35:07.434945 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:35:19 crc kubenswrapper[4842]: I0202 07:35:19.434151 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:35:19 crc kubenswrapper[4842]: E0202 07:35:19.435286 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:35:33 crc kubenswrapper[4842]: I0202 07:35:33.433702 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:35:33 crc kubenswrapper[4842]: E0202 07:35:33.434399 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:35:47 crc kubenswrapper[4842]: I0202 07:35:47.435174 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:35:47 crc kubenswrapper[4842]: E0202 07:35:47.436577 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:35:59 crc kubenswrapper[4842]: I0202 07:35:59.434413 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:35:59 crc kubenswrapper[4842]: E0202 07:35:59.435366 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:36:14 crc kubenswrapper[4842]: I0202 07:36:14.434350 4842 scope.go:117] "RemoveContainer" 
containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:36:14 crc kubenswrapper[4842]: E0202 07:36:14.435504 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:36:28 crc kubenswrapper[4842]: I0202 07:36:28.434624 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:36:28 crc kubenswrapper[4842]: E0202 07:36:28.435651 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:36:41 crc kubenswrapper[4842]: I0202 07:36:41.434538 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:36:41 crc kubenswrapper[4842]: E0202 07:36:41.435795 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:36:54 crc kubenswrapper[4842]: I0202 07:36:54.434285 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:36:54 crc kubenswrapper[4842]: E0202 07:36:54.434943 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:37:09 crc kubenswrapper[4842]: I0202 07:37:09.434006 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:37:09 crc kubenswrapper[4842]: E0202 07:37:09.434980 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:37:23 crc kubenswrapper[4842]: I0202 07:37:23.433882 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:37:23 crc kubenswrapper[4842]: E0202 07:37:23.434686 4842 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:37:34 crc kubenswrapper[4842]: I0202 07:37:34.434156 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:37:34 crc kubenswrapper[4842]: E0202 07:37:34.435246 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:37:45 crc kubenswrapper[4842]: I0202 07:37:45.441033 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:37:45 crc kubenswrapper[4842]: E0202 07:37:45.441923 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:37:56 crc kubenswrapper[4842]: I0202 07:37:56.433968 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:37:56 crc kubenswrapper[4842]: E0202 07:37:56.435554 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:38:10 crc kubenswrapper[4842]: I0202 07:38:10.433760 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:38:10 crc kubenswrapper[4842]: E0202 07:38:10.434413 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:38:22 crc kubenswrapper[4842]: I0202 07:38:22.434423 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:38:22 crc kubenswrapper[4842]: E0202 07:38:22.435303 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:38:36 crc kubenswrapper[4842]: I0202 07:38:36.434029 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:38:36 crc kubenswrapper[4842]: E0202 07:38:36.435287 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:38:47 crc kubenswrapper[4842]: I0202 07:38:47.433961 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:38:48 crc kubenswrapper[4842]: I0202 07:38:48.109309 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d"} Feb 02 07:41:12 crc kubenswrapper[4842]: I0202 07:41:12.146104 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:41:12 crc kubenswrapper[4842]: I0202 07:41:12.147434 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:41:42 crc kubenswrapper[4842]: I0202 07:41:42.146533 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:41:42 crc kubenswrapper[4842]: I0202 07:41:42.147201 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.146515 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.147263 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.147328 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.148289 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.148384 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d" gracePeriod=600 Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.968945 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d" exitCode=0 Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.969038 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d"} Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.969330 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"} Feb 02 07:42:12 crc kubenswrapper[4842]: I0202 07:42:12.969352 4842 scope.go:117] "RemoveContainer" containerID="53b1928a681726568eb304a3af92561c2ace9a968875e2fca9e2ff4aa6598bda" Feb 02 07:44:12 crc kubenswrapper[4842]: I0202 07:44:12.146857 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:44:12 crc kubenswrapper[4842]: I0202 07:44:12.147657 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.819569 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d9hpw"] Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.822374 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="extract-utilities" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.822556 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" 
containerName="extract-utilities" Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.822686 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="registry-server" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.822816 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="registry-server" Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.822958 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="registry-server" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.823085 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="registry-server" Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.823246 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="extract-content" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.823381 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="extract-content" Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.823563 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="registry-server" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.823703 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="registry-server" Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.823831 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="extract-content" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.823945 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="extract-content" Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.824070 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="extract-content" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.824240 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="extract-content" Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.824401 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="extract-utilities" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.824519 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="extract-utilities" Feb 02 07:44:23 crc kubenswrapper[4842]: E0202 07:44:23.824654 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="extract-utilities" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.824779 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="extract-utilities" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.825138 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="22349677-a0b4-43a2-9a43-61b9bbd55eed" containerName="registry-server" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.825308 4842 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ab4626e6-200f-4cd6-937d-4eb7cf9911ab" containerName="registry-server" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.825446 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5e892e-8cde-49ea-ad01-14593db40e0e" containerName="registry-server" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.827379 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.833772 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d9hpw"] Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.942930 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-utilities\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.943005 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8d2w\" (UniqueName: \"kubernetes.io/projected/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-kube-api-access-l8d2w\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:23 crc kubenswrapper[4842]: I0202 07:44:23.943086 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-catalog-content\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.044201 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-catalog-content\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.044314 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-utilities\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.044362 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8d2w\" (UniqueName: \"kubernetes.io/projected/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-kube-api-access-l8d2w\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.045323 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-utilities\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.045569 4842 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-catalog-content\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.065798 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8d2w\" (UniqueName: \"kubernetes.io/projected/6af4d552-478d-4a9f-8fcb-8a4b30a29f61-kube-api-access-l8d2w\") pod \"community-operators-d9hpw\" (UID: \"6af4d552-478d-4a9f-8fcb-8a4b30a29f61\") " pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.162359 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:24 crc kubenswrapper[4842]: I0202 07:44:24.662513 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d9hpw"] Feb 02 07:44:24 crc kubenswrapper[4842]: W0202 07:44:24.673964 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6af4d552_478d_4a9f_8fcb_8a4b30a29f61.slice/crio-60c650e2c5c0dd4deceef32acedcced2c21f22e268da13b482e9c1b7e96dab5a WatchSource:0}: Error finding container 60c650e2c5c0dd4deceef32acedcced2c21f22e268da13b482e9c1b7e96dab5a: Status 404 returned error can't find the container with id 60c650e2c5c0dd4deceef32acedcced2c21f22e268da13b482e9c1b7e96dab5a Feb 02 07:44:25 crc kubenswrapper[4842]: I0202 07:44:25.173047 4842 generic.go:334] "Generic (PLEG): container finished" podID="6af4d552-478d-4a9f-8fcb-8a4b30a29f61" containerID="13d4228704a796faa071a0142ccf878d2f1cc2ea93c1f3316e9ce309bc8be98e" exitCode=0 Feb 02 07:44:25 crc kubenswrapper[4842]: I0202 07:44:25.173105 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hpw" event={"ID":"6af4d552-478d-4a9f-8fcb-8a4b30a29f61","Type":"ContainerDied","Data":"13d4228704a796faa071a0142ccf878d2f1cc2ea93c1f3316e9ce309bc8be98e"} Feb 02 07:44:25 crc kubenswrapper[4842]: I0202 07:44:25.173146 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hpw" event={"ID":"6af4d552-478d-4a9f-8fcb-8a4b30a29f61","Type":"ContainerStarted","Data":"60c650e2c5c0dd4deceef32acedcced2c21f22e268da13b482e9c1b7e96dab5a"} Feb 02 07:44:25 crc kubenswrapper[4842]: I0202 07:44:25.176365 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 07:44:29 crc kubenswrapper[4842]: I0202 07:44:29.208116 4842 generic.go:334] "Generic (PLEG): container finished" podID="6af4d552-478d-4a9f-8fcb-8a4b30a29f61" containerID="d5522e370ac81656c4e7bcc3c2662c52297296f66ad1208fdb616b69ac366536" exitCode=0 Feb 02 07:44:29 crc kubenswrapper[4842]: I0202 07:44:29.208265 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hpw" event={"ID":"6af4d552-478d-4a9f-8fcb-8a4b30a29f61","Type":"ContainerDied","Data":"d5522e370ac81656c4e7bcc3c2662c52297296f66ad1208fdb616b69ac366536"} Feb 02 07:44:30 crc kubenswrapper[4842]: I0202 07:44:30.221397 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d9hpw" 
event={"ID":"6af4d552-478d-4a9f-8fcb-8a4b30a29f61","Type":"ContainerStarted","Data":"97c92a333bf2b3560732267b9b0e9d19422683f2fb0af959ea259e7a17893cde"} Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.163280 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.163648 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.242635 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.277929 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d9hpw" podStartSLOduration=6.8101695079999995 podStartE2EDuration="11.277909122s" podCreationTimestamp="2026-02-02 07:44:23 +0000 UTC" firstStartedPulling="2026-02-02 07:44:25.175870213 +0000 UTC m=+3490.553138165" lastFinishedPulling="2026-02-02 07:44:29.643609867 +0000 UTC m=+3495.020877779" observedRunningTime="2026-02-02 07:44:30.251340509 +0000 UTC m=+3495.628608481" watchObservedRunningTime="2026-02-02 07:44:34.277909122 +0000 UTC m=+3499.655177034" Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.322014 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d9hpw" Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.397788 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d9hpw"] Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.493949 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.494250 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7hg8l" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="registry-server" containerID="cri-o://05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52" gracePeriod=2 Feb 02 07:44:34 crc kubenswrapper[4842]: E0202 07:44:34.696915 4842 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79d21de2_d86f_4434_a132_ac1e81b63377.slice/crio-conmon-05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52.scope\": RecentStats: unable to find data in memory cache]" Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.910614 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.942694 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfhk4\" (UniqueName: \"kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4\") pod \"79d21de2-d86f-4434-a132-ac1e81b63377\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.942817 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities\") pod \"79d21de2-d86f-4434-a132-ac1e81b63377\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.942856 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content\") pod \"79d21de2-d86f-4434-a132-ac1e81b63377\" (UID: \"79d21de2-d86f-4434-a132-ac1e81b63377\") " Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.944530 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities" (OuterVolumeSpecName: "utilities") pod "79d21de2-d86f-4434-a132-ac1e81b63377" (UID: "79d21de2-d86f-4434-a132-ac1e81b63377"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:44:34 crc kubenswrapper[4842]: I0202 07:44:34.968881 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4" (OuterVolumeSpecName: "kube-api-access-dfhk4") pod "79d21de2-d86f-4434-a132-ac1e81b63377" (UID: "79d21de2-d86f-4434-a132-ac1e81b63377"). InnerVolumeSpecName "kube-api-access-dfhk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.002521 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79d21de2-d86f-4434-a132-ac1e81b63377" (UID: "79d21de2-d86f-4434-a132-ac1e81b63377"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.043512 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.043543 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d21de2-d86f-4434-a132-ac1e81b63377-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.043557 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfhk4\" (UniqueName: \"kubernetes.io/projected/79d21de2-d86f-4434-a132-ac1e81b63377-kube-api-access-dfhk4\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.269851 4842 generic.go:334] "Generic (PLEG): container finished" podID="79d21de2-d86f-4434-a132-ac1e81b63377" containerID="05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52" exitCode=0 Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.269936 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7hg8l" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.269977 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerDied","Data":"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52"} Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.270056 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7hg8l" event={"ID":"79d21de2-d86f-4434-a132-ac1e81b63377","Type":"ContainerDied","Data":"2d2ab29782781bce630b9b1ec33d723639705b917f6488a85a84e3a08847027a"} Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.270107 4842 scope.go:117] "RemoveContainer" containerID="05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.325375 4842 scope.go:117] "RemoveContainer" containerID="0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.336027 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.346779 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7hg8l"] Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.352131 4842 scope.go:117] "RemoveContainer" containerID="29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.377208 4842 scope.go:117] "RemoveContainer" containerID="05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52" Feb 02 07:44:35 crc kubenswrapper[4842]: E0202 07:44:35.377760 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52\": container with ID starting with 05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52 not found: ID does not exist" containerID="05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.377814 
4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52"} err="failed to get container status \"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52\": rpc error: code = NotFound desc = could not find container \"05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52\": container with ID starting with 05f81fbc41c88618dbdb1297884184318cd51122953e7bb58e8a90a529418d52 not found: ID does not exist" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.377844 4842 scope.go:117] "RemoveContainer" containerID="0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809" Feb 02 07:44:35 crc kubenswrapper[4842]: E0202 07:44:35.378652 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809\": container with ID starting with 0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809 not found: ID does not exist" containerID="0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.378712 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809"} err="failed to get container status \"0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809\": rpc error: code = NotFound desc = could not find container \"0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809\": container with ID starting with 0c604a9a803c123935122e17db80cd4fc1952e426889feeace08fef5229b2809 not found: ID does not exist" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.378753 4842 scope.go:117] "RemoveContainer" containerID="29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74" Feb 02 07:44:35 crc kubenswrapper[4842]: E0202 07:44:35.379439 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74\": container with ID starting with 29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74 not found: ID does not exist" containerID="29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.379474 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74"} err="failed to get container status \"29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74\": rpc error: code = NotFound desc = could not find container \"29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74\": container with ID starting with 29c357120ba115af17ef113f35ab6e72d332e8c44501980f8bf1853410154a74 not found: ID does not exist" Feb 02 07:44:35 crc kubenswrapper[4842]: I0202 07:44:35.446897 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" path="/var/lib/kubelet/pods/79d21de2-d86f-4434-a132-ac1e81b63377/volumes" Feb 02 07:44:42 crc kubenswrapper[4842]: I0202 07:44:42.146577 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:44:42 crc kubenswrapper[4842]: I0202 07:44:42.147482 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.403352 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:43 crc kubenswrapper[4842]: E0202 07:44:43.403710 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="registry-server" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.403725 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="registry-server" Feb 02 07:44:43 crc kubenswrapper[4842]: E0202 07:44:43.403740 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="extract-content" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.403748 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="extract-content" Feb 02 07:44:43 crc kubenswrapper[4842]: E0202 07:44:43.403762 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="extract-utilities" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.403772 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="extract-utilities" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.403942 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d21de2-d86f-4434-a132-ac1e81b63377" containerName="registry-server" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.405508 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.419547 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.431682 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lbj\" (UniqueName: \"kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.431759 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.431819 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.532920 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lbj\" (UniqueName: \"kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.533861 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.534735 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.535611 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.535674 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.560608 4842 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-k9lbj\" (UniqueName: \"kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj\") pod \"redhat-marketplace-s5zkp\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:43 crc kubenswrapper[4842]: I0202 07:44:43.742750 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:44 crc kubenswrapper[4842]: I0202 07:44:44.272056 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:44 crc kubenswrapper[4842]: I0202 07:44:44.357843 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerStarted","Data":"9a4c47ec4eecaaf32b1d0cd388f9d248ff0d88afb81bbd7742ee19fbee20f67d"} Feb 02 07:44:45 crc kubenswrapper[4842]: I0202 07:44:45.373342 4842 generic.go:334] "Generic (PLEG): container finished" podID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerID="0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126" exitCode=0 Feb 02 07:44:45 crc kubenswrapper[4842]: I0202 07:44:45.373460 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerDied","Data":"0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126"} Feb 02 07:44:47 crc kubenswrapper[4842]: I0202 07:44:47.413556 4842 generic.go:334] "Generic (PLEG): container finished" podID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerID="afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04" exitCode=0 Feb 02 07:44:47 crc kubenswrapper[4842]: I0202 07:44:47.414175 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerDied","Data":"afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04"} Feb 02 07:44:48 crc kubenswrapper[4842]: I0202 07:44:48.424198 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerStarted","Data":"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb"} Feb 02 07:44:48 crc kubenswrapper[4842]: I0202 07:44:48.455344 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s5zkp" podStartSLOduration=2.9526222669999997 podStartE2EDuration="5.455317805s" podCreationTimestamp="2026-02-02 07:44:43 +0000 UTC" firstStartedPulling="2026-02-02 07:44:45.375816601 +0000 UTC m=+3510.753084543" lastFinishedPulling="2026-02-02 07:44:47.878512129 +0000 UTC m=+3513.255780081" observedRunningTime="2026-02-02 07:44:48.448615879 +0000 UTC m=+3513.825883801" watchObservedRunningTime="2026-02-02 07:44:48.455317805 +0000 UTC m=+3513.832585747" Feb 02 07:44:53 crc kubenswrapper[4842]: I0202 07:44:53.743146 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:53 crc kubenswrapper[4842]: I0202 07:44:53.743802 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:53 crc kubenswrapper[4842]: I0202 07:44:53.821257 4842 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:54 crc kubenswrapper[4842]: I0202 07:44:54.556277 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:55 crc kubenswrapper[4842]: I0202 07:44:55.027533 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:56 crc kubenswrapper[4842]: I0202 07:44:56.496899 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s5zkp" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="registry-server" containerID="cri-o://dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb" gracePeriod=2 Feb 02 07:44:56 crc kubenswrapper[4842]: I0202 07:44:56.894776 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.070437 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lbj\" (UniqueName: \"kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj\") pod \"c164b1b9-c3c4-403d-9000-6a49460db9de\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.070915 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content\") pod \"c164b1b9-c3c4-403d-9000-6a49460db9de\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.071009 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities\") pod \"c164b1b9-c3c4-403d-9000-6a49460db9de\" (UID: \"c164b1b9-c3c4-403d-9000-6a49460db9de\") " Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.072177 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities" (OuterVolumeSpecName: "utilities") pod "c164b1b9-c3c4-403d-9000-6a49460db9de" (UID: "c164b1b9-c3c4-403d-9000-6a49460db9de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.080409 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj" (OuterVolumeSpecName: "kube-api-access-k9lbj") pod "c164b1b9-c3c4-403d-9000-6a49460db9de" (UID: "c164b1b9-c3c4-403d-9000-6a49460db9de"). InnerVolumeSpecName "kube-api-access-k9lbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.114984 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c164b1b9-c3c4-403d-9000-6a49460db9de" (UID: "c164b1b9-c3c4-403d-9000-6a49460db9de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.174575 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9lbj\" (UniqueName: \"kubernetes.io/projected/c164b1b9-c3c4-403d-9000-6a49460db9de-kube-api-access-k9lbj\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.174853 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.174878 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c164b1b9-c3c4-403d-9000-6a49460db9de-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.508745 4842 generic.go:334] "Generic (PLEG): container finished" podID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerID="dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb" exitCode=0 Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.508820 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerDied","Data":"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb"} Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.508877 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s5zkp" event={"ID":"c164b1b9-c3c4-403d-9000-6a49460db9de","Type":"ContainerDied","Data":"9a4c47ec4eecaaf32b1d0cd388f9d248ff0d88afb81bbd7742ee19fbee20f67d"} Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.508920 4842 scope.go:117] "RemoveContainer" containerID="dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.508991 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s5zkp" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.543309 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.549501 4842 scope.go:117] "RemoveContainer" containerID="afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.550915 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s5zkp"] Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.576912 4842 scope.go:117] "RemoveContainer" containerID="0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.605792 4842 scope.go:117] "RemoveContainer" containerID="dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb" Feb 02 07:44:57 crc kubenswrapper[4842]: E0202 07:44:57.606325 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb\": container with ID starting with dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb not found: ID does not exist" containerID="dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.606364 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb"} err="failed to get container status \"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb\": rpc error: code = NotFound desc = could not find container \"dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb\": container with ID starting with dea646af9bd267fbe69b814a5ac440cb747701d180d86f0a889410c2f6550cfb not found: ID does not exist" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.606392 4842 scope.go:117] "RemoveContainer" containerID="afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04" Feb 02 07:44:57 crc kubenswrapper[4842]: E0202 07:44:57.606769 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04\": container with ID starting with afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04 not found: ID does not exist" containerID="afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.606799 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04"} err="failed to get container status \"afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04\": rpc error: code = NotFound desc = could not find container \"afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04\": container with ID starting with afb2f2590980251f385bfa41864d1ec6439d5ad46cfc99d6fba6cb46436aeb04 not found: ID does not exist" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.606819 4842 scope.go:117] "RemoveContainer" containerID="0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126" Feb 02 07:44:57 crc kubenswrapper[4842]: E0202 07:44:57.607057 4842 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126\": container with ID starting with 0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126 not found: ID does not exist" containerID="0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126" Feb 02 07:44:57 crc kubenswrapper[4842]: I0202 07:44:57.607089 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126"} err="failed to get container status \"0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126\": rpc error: code = NotFound desc = could not find container \"0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126\": container with ID starting with 0060c14970e9770ee15974169d1a16a0f40ec75bb06def287ad54921de0bc126 not found: ID does not exist" Feb 02 07:44:59 crc kubenswrapper[4842]: I0202 07:44:59.450257 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" path="/var/lib/kubelet/pods/c164b1b9-c3c4-403d-9000-6a49460db9de/volumes" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.154505 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn"] Feb 02 07:45:00 crc kubenswrapper[4842]: E0202 07:45:00.154868 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="extract-utilities" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.154885 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="extract-utilities" Feb 02 07:45:00 crc kubenswrapper[4842]: E0202 07:45:00.154913 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.154921 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4842]: E0202 07:45:00.154939 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="extract-content" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.154947 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="extract-content" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.155112 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c164b1b9-c3c4-403d-9000-6a49460db9de" containerName="registry-server" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.155748 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.158535 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.158786 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.168457 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn"] Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.340054 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.340134 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klmzz\" (UniqueName: \"kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.340231 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.441047 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.441121 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.441197 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klmzz\" (UniqueName: \"kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.441843 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume\") pod 
\"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.453971 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.468029 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klmzz\" (UniqueName: \"kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz\") pod \"collect-profiles-29500305-fx7vn\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.505869 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:00 crc kubenswrapper[4842]: I0202 07:45:00.934419 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn"] Feb 02 07:45:00 crc kubenswrapper[4842]: W0202 07:45:00.946379 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aa0f9fa_efa5_4afa_bce6_88ca1eeef6b6.slice/crio-85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9 WatchSource:0}: Error finding container 85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9: Status 404 returned error can't find the container with id 85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9 Feb 02 07:45:01 crc kubenswrapper[4842]: I0202 07:45:01.563286 4842 generic.go:334] "Generic (PLEG): container finished" podID="7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" containerID="9c91867e37901f6b77d290214bde0cb71563f9ff02b28875bfa2c96b8d680083" exitCode=0 Feb 02 07:45:01 crc kubenswrapper[4842]: I0202 07:45:01.563338 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" event={"ID":"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6","Type":"ContainerDied","Data":"9c91867e37901f6b77d290214bde0cb71563f9ff02b28875bfa2c96b8d680083"} Feb 02 07:45:01 crc kubenswrapper[4842]: I0202 07:45:01.563516 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" event={"ID":"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6","Type":"ContainerStarted","Data":"85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9"} Feb 02 07:45:02 crc kubenswrapper[4842]: I0202 07:45:02.953864 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.083019 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume\") pod \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.083130 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume\") pod \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.083173 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klmzz\" (UniqueName: \"kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz\") pod \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\" (UID: \"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6\") " Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.084138 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume" (OuterVolumeSpecName: "config-volume") pod "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" (UID: "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.088214 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" (UID: "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.088368 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz" (OuterVolumeSpecName: "kube-api-access-klmzz") pod "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" (UID: "7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6"). InnerVolumeSpecName "kube-api-access-klmzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.184307 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.184340 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.184352 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klmzz\" (UniqueName: \"kubernetes.io/projected/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6-kube-api-access-klmzz\") on node \"crc\" DevicePath \"\"" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.578429 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" event={"ID":"7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6","Type":"ContainerDied","Data":"85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9"} Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.578781 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85f47b9014f6d0f4127be20e8d5d0f4d7db7ebb54a0a6452db42303cd38497a9" Feb 02 07:45:03 crc kubenswrapper[4842]: I0202 07:45:03.578833 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn" Feb 02 07:45:04 crc kubenswrapper[4842]: I0202 07:45:04.064883 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"] Feb 02 07:45:04 crc kubenswrapper[4842]: I0202 07:45:04.069762 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500260-8hlgn"] Feb 02 07:45:05 crc kubenswrapper[4842]: I0202 07:45:05.441844 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da36ad95-63f3-4cfb-8da7-96b730ccc79b" path="/var/lib/kubelet/pods/da36ad95-63f3-4cfb-8da7-96b730ccc79b/volumes" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.146517 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.147469 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.147561 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.148731 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"} 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.148866 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" gracePeriod=600 Feb 02 07:45:12 crc kubenswrapper[4842]: E0202 07:45:12.280387 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.670835 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" exitCode=0 Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.670861 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"} Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.670904 4842 scope.go:117] "RemoveContainer" containerID="d04892349eecb502e1841b1180408fe7aa97060cc4ee71a56829833e1ef84e6d" Feb 02 07:45:12 crc kubenswrapper[4842]: I0202 07:45:12.671807 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:45:12 crc kubenswrapper[4842]: E0202 07:45:12.672061 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:45:21 crc kubenswrapper[4842]: I0202 07:45:21.956568 4842 scope.go:117] "RemoveContainer" containerID="dce0962765d9bf38cd06dbb96cb12282f1586c08a47e1dfbc418a62406ef2e49" Feb 02 07:45:26 crc kubenswrapper[4842]: I0202 07:45:26.433688 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:45:26 crc kubenswrapper[4842]: E0202 07:45:26.436773 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:45:37 crc kubenswrapper[4842]: I0202 07:45:37.433492 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:45:37 crc 
kubenswrapper[4842]: E0202 07:45:37.434672 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:45:48 crc kubenswrapper[4842]: I0202 07:45:48.434244 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:45:48 crc kubenswrapper[4842]: E0202 07:45:48.435341 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:45:59 crc kubenswrapper[4842]: I0202 07:45:59.433874 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:45:59 crc kubenswrapper[4842]: E0202 07:45:59.434623 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:46:12 crc kubenswrapper[4842]: I0202 07:46:12.433874 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:46:12 crc kubenswrapper[4842]: E0202 07:46:12.434928 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:46:26 crc kubenswrapper[4842]: I0202 07:46:26.433688 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:46:26 crc kubenswrapper[4842]: E0202 07:46:26.434487 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:46:41 crc kubenswrapper[4842]: I0202 07:46:41.434198 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:46:41 crc kubenswrapper[4842]: E0202 07:46:41.435939 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:46:55 crc kubenswrapper[4842]: I0202 07:46:55.441682 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:46:55 crc kubenswrapper[4842]: E0202 07:46:55.442873 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:47:08 crc kubenswrapper[4842]: I0202 07:47:08.434341 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:47:08 crc kubenswrapper[4842]: E0202 07:47:08.435535 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:47:23 crc kubenswrapper[4842]: I0202 07:47:23.435205 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:47:23 crc kubenswrapper[4842]: E0202 07:47:23.436135 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:47:37 crc kubenswrapper[4842]: I0202 07:47:37.433922 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:47:37 crc kubenswrapper[4842]: E0202 07:47:37.435036 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:47:50 crc kubenswrapper[4842]: I0202 07:47:50.434668 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:47:50 crc kubenswrapper[4842]: E0202 07:47:50.435855 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:02 crc kubenswrapper[4842]: I0202 07:48:02.434181 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:48:02 crc kubenswrapper[4842]: E0202 07:48:02.435263 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:14 crc kubenswrapper[4842]: I0202 07:48:14.433421 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:48:14 crc kubenswrapper[4842]: E0202 07:48:14.434133 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:27 crc kubenswrapper[4842]: I0202 07:48:27.434138 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:48:27 crc kubenswrapper[4842]: E0202 07:48:27.435358 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.084345 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7hzjr"] Feb 02 07:48:31 crc kubenswrapper[4842]: E0202 07:48:31.085459 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" containerName="collect-profiles" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.085492 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" containerName="collect-profiles" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.085828 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" containerName="collect-profiles" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.088327 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.098579 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hzjr"] Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.141080 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-catalog-content\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.141427 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwbwt\" (UniqueName: \"kubernetes.io/projected/940dd57b-92a3-4e95-b3b4-5df0efe013b1-kube-api-access-lwbwt\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.141584 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-utilities\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.242866 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-catalog-content\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.243177 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwbwt\" (UniqueName: \"kubernetes.io/projected/940dd57b-92a3-4e95-b3b4-5df0efe013b1-kube-api-access-lwbwt\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.243318 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-utilities\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.243441 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-catalog-content\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.243833 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/940dd57b-92a3-4e95-b3b4-5df0efe013b1-utilities\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.263368 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lwbwt\" (UniqueName: \"kubernetes.io/projected/940dd57b-92a3-4e95-b3b4-5df0efe013b1-kube-api-access-lwbwt\") pod \"certified-operators-7hzjr\" (UID: \"940dd57b-92a3-4e95-b3b4-5df0efe013b1\") " pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.470869 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:31 crc kubenswrapper[4842]: I0202 07:48:31.925182 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hzjr"] Feb 02 07:48:32 crc kubenswrapper[4842]: I0202 07:48:32.647418 4842 generic.go:334] "Generic (PLEG): container finished" podID="940dd57b-92a3-4e95-b3b4-5df0efe013b1" containerID="b75725e4c50215f0635909d4cdaa29f7f6dcb1530244ea888272ca94fe49ea4b" exitCode=0 Feb 02 07:48:32 crc kubenswrapper[4842]: I0202 07:48:32.647479 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hzjr" event={"ID":"940dd57b-92a3-4e95-b3b4-5df0efe013b1","Type":"ContainerDied","Data":"b75725e4c50215f0635909d4cdaa29f7f6dcb1530244ea888272ca94fe49ea4b"} Feb 02 07:48:32 crc kubenswrapper[4842]: I0202 07:48:32.647519 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hzjr" event={"ID":"940dd57b-92a3-4e95-b3b4-5df0efe013b1","Type":"ContainerStarted","Data":"3d60c79a8911f95c3847b82d02ccf6ea42ed7ecae12f4e541bb7f8bc932c2f28"} Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.464280 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-279f8"] Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.466159 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.473788 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-279f8"] Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.599152 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgz4s\" (UniqueName: \"kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.599209 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.599395 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.701159 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgz4s\" (UniqueName: \"kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.701205 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.701276 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.701805 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.702136 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.721319 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xgz4s\" (UniqueName: \"kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s\") pod \"redhat-operators-279f8\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") " pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:34 crc kubenswrapper[4842]: I0202 07:48:34.795045 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:35 crc kubenswrapper[4842]: I0202 07:48:35.039315 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-279f8"] Feb 02 07:48:35 crc kubenswrapper[4842]: W0202 07:48:35.049595 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71b86c40_ec89_476e_b4ef_c589af5cfd51.slice/crio-1ea6438bcf564fdba3229da5d634c0316c7af7e2b4ca957ebfdb03c355e56e96 WatchSource:0}: Error finding container 1ea6438bcf564fdba3229da5d634c0316c7af7e2b4ca957ebfdb03c355e56e96: Status 404 returned error can't find the container with id 1ea6438bcf564fdba3229da5d634c0316c7af7e2b4ca957ebfdb03c355e56e96 Feb 02 07:48:35 crc kubenswrapper[4842]: I0202 07:48:35.671102 4842 generic.go:334] "Generic (PLEG): container finished" podID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerID="fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2" exitCode=0 Feb 02 07:48:35 crc kubenswrapper[4842]: I0202 07:48:35.671226 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerDied","Data":"fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2"} Feb 02 07:48:35 crc kubenswrapper[4842]: I0202 07:48:35.672366 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerStarted","Data":"1ea6438bcf564fdba3229da5d634c0316c7af7e2b4ca957ebfdb03c355e56e96"} Feb 02 07:48:37 crc kubenswrapper[4842]: I0202 07:48:37.691734 4842 generic.go:334] "Generic (PLEG): container finished" podID="940dd57b-92a3-4e95-b3b4-5df0efe013b1" containerID="f0a4a91c57e0912a079986a777057c27130537abf36090ea266336979a3fa017" exitCode=0 Feb 02 07:48:37 crc kubenswrapper[4842]: I0202 07:48:37.691999 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hzjr" event={"ID":"940dd57b-92a3-4e95-b3b4-5df0efe013b1","Type":"ContainerDied","Data":"f0a4a91c57e0912a079986a777057c27130537abf36090ea266336979a3fa017"} Feb 02 07:48:37 crc kubenswrapper[4842]: I0202 07:48:37.699285 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerStarted","Data":"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490"} Feb 02 07:48:38 crc kubenswrapper[4842]: I0202 07:48:38.433966 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:48:38 crc kubenswrapper[4842]: E0202 07:48:38.434406 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:38 crc kubenswrapper[4842]: I0202 07:48:38.710305 4842 generic.go:334] "Generic (PLEG): container finished" podID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerID="4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490" exitCode=0 Feb 02 07:48:38 crc kubenswrapper[4842]: I0202 07:48:38.710444 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerDied","Data":"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490"} Feb 02 07:48:38 crc kubenswrapper[4842]: I0202 07:48:38.713997 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7hzjr" event={"ID":"940dd57b-92a3-4e95-b3b4-5df0efe013b1","Type":"ContainerStarted","Data":"176976b5333699da049b24ff21866f294c1ef9f0c8416775fd13db72d7127058"} Feb 02 07:48:38 crc kubenswrapper[4842]: I0202 07:48:38.792444 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7hzjr" podStartSLOduration=2.31381694 podStartE2EDuration="7.792416044s" podCreationTimestamp="2026-02-02 07:48:31 +0000 UTC" firstStartedPulling="2026-02-02 07:48:32.65014401 +0000 UTC m=+3738.027411952" lastFinishedPulling="2026-02-02 07:48:38.128743134 +0000 UTC m=+3743.506011056" observedRunningTime="2026-02-02 07:48:38.779004821 +0000 UTC m=+3744.156272773" watchObservedRunningTime="2026-02-02 07:48:38.792416044 +0000 UTC m=+3744.169683986" Feb 02 07:48:39 crc kubenswrapper[4842]: I0202 07:48:39.725786 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerStarted","Data":"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac"} Feb 02 07:48:39 crc kubenswrapper[4842]: I0202 07:48:39.756540 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-279f8" podStartSLOduration=2.322324935 podStartE2EDuration="5.756507845s" podCreationTimestamp="2026-02-02 07:48:34 +0000 UTC" firstStartedPulling="2026-02-02 07:48:35.673276097 +0000 UTC m=+3741.050544009" lastFinishedPulling="2026-02-02 07:48:39.107459007 +0000 UTC m=+3744.484726919" observedRunningTime="2026-02-02 07:48:39.750629069 +0000 UTC m=+3745.127896991" watchObservedRunningTime="2026-02-02 07:48:39.756507845 +0000 UTC m=+3745.133775787" Feb 02 07:48:41 crc kubenswrapper[4842]: I0202 07:48:41.471845 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:41 crc kubenswrapper[4842]: I0202 07:48:41.471890 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:41 crc kubenswrapper[4842]: I0202 07:48:41.525337 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:44 crc kubenswrapper[4842]: I0202 07:48:44.795191 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:44 crc kubenswrapper[4842]: I0202 07:48:44.795632 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 
07:48:45 crc kubenswrapper[4842]: I0202 07:48:45.866178 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-279f8" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="registry-server" probeResult="failure" output=< Feb 02 07:48:45 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 07:48:45 crc kubenswrapper[4842]: > Feb 02 07:48:49 crc kubenswrapper[4842]: I0202 07:48:49.434452 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a" Feb 02 07:48:49 crc kubenswrapper[4842]: E0202 07:48:49.435503 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.523557 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7hzjr" Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.605613 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7hzjr"] Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.643782 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.644033 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cbwzh" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="registry-server" containerID="cri-o://e64acd0481969dd97f8f6ecb1ab6976f73e44f1ae7f1c189557824f80b337968" gracePeriod=2 Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.841546 4842 generic.go:334] "Generic (PLEG): container finished" podID="9969706e-304c-490a-b15d-7d0bfc99261c" containerID="e64acd0481969dd97f8f6ecb1ab6976f73e44f1ae7f1c189557824f80b337968" exitCode=0 Feb 02 07:48:51 crc kubenswrapper[4842]: I0202 07:48:51.842292 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerDied","Data":"e64acd0481969dd97f8f6ecb1ab6976f73e44f1ae7f1c189557824f80b337968"} Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.040725 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.087567 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content\") pod \"9969706e-304c-490a-b15d-7d0bfc99261c\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.087686 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvsxr\" (UniqueName: \"kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr\") pod \"9969706e-304c-490a-b15d-7d0bfc99261c\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.087780 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities\") pod \"9969706e-304c-490a-b15d-7d0bfc99261c\" (UID: \"9969706e-304c-490a-b15d-7d0bfc99261c\") " Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.088511 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities" (OuterVolumeSpecName: "utilities") pod "9969706e-304c-490a-b15d-7d0bfc99261c" (UID: "9969706e-304c-490a-b15d-7d0bfc99261c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.093317 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr" (OuterVolumeSpecName: "kube-api-access-tvsxr") pod "9969706e-304c-490a-b15d-7d0bfc99261c" (UID: "9969706e-304c-490a-b15d-7d0bfc99261c"). InnerVolumeSpecName "kube-api-access-tvsxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.157027 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9969706e-304c-490a-b15d-7d0bfc99261c" (UID: "9969706e-304c-490a-b15d-7d0bfc99261c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.189472 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvsxr\" (UniqueName: \"kubernetes.io/projected/9969706e-304c-490a-b15d-7d0bfc99261c-kube-api-access-tvsxr\") on node \"crc\" DevicePath \"\"" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.189503 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.189516 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9969706e-304c-490a-b15d-7d0bfc99261c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.850769 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cbwzh" event={"ID":"9969706e-304c-490a-b15d-7d0bfc99261c","Type":"ContainerDied","Data":"87da024578fe003edad40db056fe8ec4f30280deba8415eb825b3aeb82ca3997"} Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.850833 4842 scope.go:117] "RemoveContainer" containerID="e64acd0481969dd97f8f6ecb1ab6976f73e44f1ae7f1c189557824f80b337968" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.850832 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cbwzh" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.875761 4842 scope.go:117] "RemoveContainer" containerID="308b61160ba5e467d88f1ac70bd85a0adb7d7b33d6c1eb5a0233036f6970dc7b" Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.889466 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.894766 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cbwzh"] Feb 02 07:48:52 crc kubenswrapper[4842]: I0202 07:48:52.895956 4842 scope.go:117] "RemoveContainer" containerID="cdc5b57eaa471b1df4736cdcd50fb5f9ddf54fbd99f33734d0e692fc9f77a97f" Feb 02 07:48:53 crc kubenswrapper[4842]: I0202 07:48:53.448632 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" path="/var/lib/kubelet/pods/9969706e-304c-490a-b15d-7d0bfc99261c/volumes" Feb 02 07:48:54 crc kubenswrapper[4842]: I0202 07:48:54.874729 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:54 crc kubenswrapper[4842]: I0202 07:48:54.947410 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-279f8" Feb 02 07:48:55 crc kubenswrapper[4842]: I0202 07:48:55.967102 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-279f8"] Feb 02 07:48:56 crc kubenswrapper[4842]: I0202 07:48:56.888578 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-279f8" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="registry-server" containerID="cri-o://6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac" gracePeriod=2 Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.284472 4842 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.385669 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content\") pod \"71b86c40-ec89-476e-b4ef-c589af5cfd51\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") "
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.386004 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities\") pod \"71b86c40-ec89-476e-b4ef-c589af5cfd51\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") "
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.386098 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgz4s\" (UniqueName: \"kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s\") pod \"71b86c40-ec89-476e-b4ef-c589af5cfd51\" (UID: \"71b86c40-ec89-476e-b4ef-c589af5cfd51\") "
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.388345 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities" (OuterVolumeSpecName: "utilities") pod "71b86c40-ec89-476e-b4ef-c589af5cfd51" (UID: "71b86c40-ec89-476e-b4ef-c589af5cfd51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.401973 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s" (OuterVolumeSpecName: "kube-api-access-xgz4s") pod "71b86c40-ec89-476e-b4ef-c589af5cfd51" (UID: "71b86c40-ec89-476e-b4ef-c589af5cfd51"). InnerVolumeSpecName "kube-api-access-xgz4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.487840 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.487886 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgz4s\" (UniqueName: \"kubernetes.io/projected/71b86c40-ec89-476e-b4ef-c589af5cfd51-kube-api-access-xgz4s\") on node \"crc\" DevicePath \"\""
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.536107 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71b86c40-ec89-476e-b4ef-c589af5cfd51" (UID: "71b86c40-ec89-476e-b4ef-c589af5cfd51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.590306 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71b86c40-ec89-476e-b4ef-c589af5cfd51-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.902972 4842 generic.go:334] "Generic (PLEG): container finished" podID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerID="6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac" exitCode=0
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.903037 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerDied","Data":"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac"}
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.903054 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-279f8"
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.903084 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-279f8" event={"ID":"71b86c40-ec89-476e-b4ef-c589af5cfd51","Type":"ContainerDied","Data":"1ea6438bcf564fdba3229da5d634c0316c7af7e2b4ca957ebfdb03c355e56e96"}
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.903116 4842 scope.go:117] "RemoveContainer" containerID="6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac"
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.931497 4842 scope.go:117] "RemoveContainer" containerID="4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490"
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.962934 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-279f8"]
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.976960 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-279f8"]
Feb 02 07:48:57 crc kubenswrapper[4842]: I0202 07:48:57.988755 4842 scope.go:117] "RemoveContainer" containerID="fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2"
Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.026130 4842 scope.go:117] "RemoveContainer" containerID="6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac"
Feb 02 07:48:58 crc kubenswrapper[4842]: E0202 07:48:58.026753 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac\": container with ID starting with 6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac not found: ID does not exist" containerID="6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac"
Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.026804 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac"} err="failed to get container status \"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac\": rpc error: code = NotFound desc = could not find container \"6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac\": container with ID starting with 6ff94907d807a7db02aa2e58925b5604aae68a69af14643e87b3e18b54a027ac not found: ID does not exist"
Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.026825 4842 scope.go:117] "RemoveContainer" containerID="4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490"
Feb 02 07:48:58 crc kubenswrapper[4842]: E0202 07:48:58.027235 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490\": container with ID starting with 4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490 not found: ID does not exist" containerID="4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490"
Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.027258 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490"} err="failed to get container status \"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490\": rpc error: code = NotFound desc = could not find container \"4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490\": container with ID starting with 4f536cbb24c94518e58fe5cab2f2d67610e926117a2fdbd989f3e67907fb7490 not found: ID does not exist"
Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.027271 4842 scope.go:117] "RemoveContainer" containerID="fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2"
Feb 02 07:48:58 crc kubenswrapper[4842]: E0202 07:48:58.027831 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2\": container with ID starting with fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2 not found: ID does not exist" containerID="fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2"
Feb 02 07:48:58 crc kubenswrapper[4842]: I0202 07:48:58.027884 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2"} err="failed to get container status \"fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2\": rpc error: code = NotFound desc = could not find container \"fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2\": container with ID starting with fa37a0a036b25e8309cefa8f2c531f3df4eb62c16702b62762d98367686100e2 not found: ID does not exist"
Feb 02 07:48:59 crc kubenswrapper[4842]: I0202 07:48:59.450066 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" path="/var/lib/kubelet/pods/71b86c40-ec89-476e-b4ef-c589af5cfd51/volumes"
Feb 02 07:49:04 crc kubenswrapper[4842]: I0202 07:49:04.434948 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"
Feb 02 07:49:04 crc kubenswrapper[4842]: E0202 07:49:04.437162 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:49:16 crc kubenswrapper[4842]: I0202 07:49:16.434371 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"
Feb 02 07:49:16 crc kubenswrapper[4842]: E0202 07:49:16.435826 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:49:27 crc kubenswrapper[4842]: I0202 07:49:27.433501 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"
Feb 02 07:49:27 crc kubenswrapper[4842]: E0202 07:49:27.434922 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:49:41 crc kubenswrapper[4842]: I0202 07:49:41.435303 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"
Feb 02 07:49:41 crc kubenswrapper[4842]: E0202 07:49:41.436999 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:49:55 crc kubenswrapper[4842]: I0202 07:49:55.441770 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"
Feb 02 07:49:55 crc kubenswrapper[4842]: E0202 07:49:55.442875 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:50:09 crc kubenswrapper[4842]: I0202 07:50:09.433741 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"
Feb 02 07:50:09 crc kubenswrapper[4842]: E0202 07:50:09.434801 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 07:50:23 crc kubenswrapper[4842]: I0202 07:50:23.434207 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"
Feb 02 07:50:23 crc kubenswrapper[4842]: I0202 07:50:23.892550 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2"}
Feb 02 07:52:42 crc kubenswrapper[4842]: I0202 07:52:42.146095 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:52:42 crc kubenswrapper[4842]: I0202 07:52:42.146709 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:53:12 crc kubenswrapper[4842]: I0202 07:53:12.146429 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:53:12 crc kubenswrapper[4842]: I0202 07:53:12.146999 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.145894 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.146573 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.146653 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr"
Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.147598 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.147731 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2" gracePeriod=600
Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.281197 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2" exitCode=0
Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.281273 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2"}
Feb 02 07:53:42 crc kubenswrapper[4842]: I0202 07:53:42.281705 4842 scope.go:117] "RemoveContainer" containerID="61f5faa247be5f8a2ed4f9a1396c6b9e8d145273c14714e2008cb43de509cd9a"
Feb 02 07:53:43 crc kubenswrapper[4842]: I0202 07:53:43.297054 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"}
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.139968 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"]
Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140808 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="extract-content"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140824 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="extract-content"
Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140837 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="extract-content"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140845 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="extract-content"
Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140867 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="extract-utilities"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140875 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="extract-utilities"
Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140890 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="registry-server"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140897 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="registry-server"
Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140906 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="extract-utilities"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140913 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="extract-utilities"
Feb 02 07:55:37 crc kubenswrapper[4842]: E0202 07:55:37.140931 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="registry-server"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.140940 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="registry-server"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.141100 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="9969706e-304c-490a-b15d-7d0bfc99261c" containerName="registry-server"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.141122 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="71b86c40-ec89-476e-b4ef-c589af5cfd51" containerName="registry-server"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.142375 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.159050 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"]
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.229936 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxpdw\" (UniqueName: \"kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.230022 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.230148 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.332150 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.332375 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxpdw\" (UniqueName: \"kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.332520 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.332767 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.333370 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.376558 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxpdw\" (UniqueName: \"kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw\") pod \"redhat-marketplace-vwxbs\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") " pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:37 crc kubenswrapper[4842]: I0202 07:55:37.471561 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:38 crc kubenswrapper[4842]: I0202 07:55:37.998335 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"]
Feb 02 07:55:38 crc kubenswrapper[4842]: I0202 07:55:38.654781 4842 generic.go:334] "Generic (PLEG): container finished" podID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerID="48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154" exitCode=0
Feb 02 07:55:38 crc kubenswrapper[4842]: I0202 07:55:38.654911 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerDied","Data":"48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154"}
Feb 02 07:55:38 crc kubenswrapper[4842]: I0202 07:55:38.655515 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerStarted","Data":"4c94cb065d9d3c1220fea3b4b684400a17590e22953a4eb305c511b4386ed940"}
Feb 02 07:55:38 crc kubenswrapper[4842]: I0202 07:55:38.659210 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 07:55:40 crc kubenswrapper[4842]: I0202 07:55:40.682181 4842 generic.go:334] "Generic (PLEG): container finished" podID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerID="d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb" exitCode=0
Feb 02 07:55:40 crc kubenswrapper[4842]: I0202 07:55:40.682405 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerDied","Data":"d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb"}
Feb 02 07:55:41 crc kubenswrapper[4842]: I0202 07:55:41.697144 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerStarted","Data":"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768"}
Feb 02 07:55:41 crc kubenswrapper[4842]: I0202 07:55:41.735570 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwxbs" podStartSLOduration=2.259022431 podStartE2EDuration="4.735195522s" podCreationTimestamp="2026-02-02 07:55:37 +0000 UTC" firstStartedPulling="2026-02-02 07:55:38.658778784 +0000 UTC m=+4164.036046726" lastFinishedPulling="2026-02-02 07:55:41.134951895 +0000 UTC m=+4166.512219817" observedRunningTime="2026-02-02 07:55:41.730730241 +0000 UTC m=+4167.107998173" watchObservedRunningTime="2026-02-02 07:55:41.735195522 +0000 UTC m=+4167.112463474"
Feb 02 07:55:42 crc kubenswrapper[4842]: I0202 07:55:42.146834 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:55:42 crc kubenswrapper[4842]: I0202 07:55:42.147506 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:55:47 crc kubenswrapper[4842]: I0202 07:55:47.476500 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:47 crc kubenswrapper[4842]: I0202 07:55:47.477265 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:47 crc kubenswrapper[4842]: I0202 07:55:47.544559 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:47 crc kubenswrapper[4842]: I0202 07:55:47.872532 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:47 crc kubenswrapper[4842]: I0202 07:55:47.938955 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"]
Feb 02 07:55:49 crc kubenswrapper[4842]: I0202 07:55:49.773338 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vwxbs" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="registry-server" containerID="cri-o://5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768" gracePeriod=2
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.257515 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.435461 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities\") pod \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") "
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.435536 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content\") pod \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") "
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.435670 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxpdw\" (UniqueName: \"kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw\") pod \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\" (UID: \"f8256e28-ef80-4c77-87cf-5c5fa552a61a\") "
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.437442 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities" (OuterVolumeSpecName: "utilities") pod "f8256e28-ef80-4c77-87cf-5c5fa552a61a" (UID: "f8256e28-ef80-4c77-87cf-5c5fa552a61a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.444254 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw" (OuterVolumeSpecName: "kube-api-access-wxpdw") pod "f8256e28-ef80-4c77-87cf-5c5fa552a61a" (UID: "f8256e28-ef80-4c77-87cf-5c5fa552a61a"). InnerVolumeSpecName "kube-api-access-wxpdw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.477288 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8256e28-ef80-4c77-87cf-5c5fa552a61a" (UID: "f8256e28-ef80-4c77-87cf-5c5fa552a61a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.537515 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxpdw\" (UniqueName: \"kubernetes.io/projected/f8256e28-ef80-4c77-87cf-5c5fa552a61a-kube-api-access-wxpdw\") on node \"crc\" DevicePath \"\""
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.537583 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.537613 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8256e28-ef80-4c77-87cf-5c5fa552a61a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.785763 4842 generic.go:334] "Generic (PLEG): container finished" podID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerID="5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768" exitCode=0
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.785822 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerDied","Data":"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768"}
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.785873 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwxbs" event={"ID":"f8256e28-ef80-4c77-87cf-5c5fa552a61a","Type":"ContainerDied","Data":"4c94cb065d9d3c1220fea3b4b684400a17590e22953a4eb305c511b4386ed940"}
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.785908 4842 scope.go:117] "RemoveContainer" containerID="5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768"
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.789103 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwxbs"
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.810863 4842 scope.go:117] "RemoveContainer" containerID="d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb"
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.843768 4842 scope.go:117] "RemoveContainer" containerID="48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154"
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.852152 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"]
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.864676 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwxbs"]
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.895449 4842 scope.go:117] "RemoveContainer" containerID="5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768"
Feb 02 07:55:50 crc kubenswrapper[4842]: E0202 07:55:50.896309 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768\": container with ID starting with 5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768 not found: ID does not exist" containerID="5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768"
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.896378 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768"} err="failed to get container status \"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768\": rpc error: code = NotFound desc = could not find container \"5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768\": container with ID starting with 5f1da0ede596c76fb9ae9da153f4f8f6264903174eaebaebe6c81d97ed766768 not found: ID does not exist"
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.896423 4842 scope.go:117] "RemoveContainer" containerID="d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb"
Feb 02 07:55:50 crc kubenswrapper[4842]: E0202 07:55:50.897551 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb\": container with ID starting with d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb not found: ID does not exist" containerID="d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb"
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.897594 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb"} err="failed to get container status \"d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb\": rpc error: code = NotFound desc = could not find container \"d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb\": container with ID starting with d4aeba5e68412a1ce9bda362ec1609a0710e1b0acbfd4671d89e780140beeceb not found: ID does not exist"
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.897630 4842 scope.go:117] "RemoveContainer" containerID="48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154"
Feb 02 07:55:50 crc kubenswrapper[4842]: E0202 07:55:50.898202 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154\": container with ID starting with 48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154 not found: ID does not exist" containerID="48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154"
Feb 02 07:55:50 crc kubenswrapper[4842]: I0202 07:55:50.898292 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154"} err="failed to get container status \"48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154\": rpc error: code = NotFound desc = could not find container \"48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154\": container with ID starting with 48e5229822fde2e70e040c5712810e66420db0bed95168227716d324623ba154 not found: ID does not exist"
Feb 02 07:55:51 crc kubenswrapper[4842]: I0202 07:55:51.451786 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" path="/var/lib/kubelet/pods/f8256e28-ef80-4c77-87cf-5c5fa552a61a/volumes"
Feb 02 07:56:12 crc kubenswrapper[4842]: I0202 07:56:12.147141 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 02 07:56:12 crc kubenswrapper[4842]: I0202 07:56:12.148141 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.302851 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5xgng"]
Feb 02 07:56:21 crc kubenswrapper[4842]: E0202 07:56:21.303964 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="extract-utilities"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.303986 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="extract-utilities"
Feb 02 07:56:21 crc kubenswrapper[4842]: E0202 07:56:21.304038 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="extract-content"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.304053 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="extract-content"
Feb 02 07:56:21 crc kubenswrapper[4842]: E0202 07:56:21.304078 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="registry-server"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.304092 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="registry-server"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.304362 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8256e28-ef80-4c77-87cf-5c5fa552a61a" containerName="registry-server"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.306259 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.330781 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5xgng"]
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.464737 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.464870 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.465199 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8c44\" (UniqueName: \"kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.566774 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8c44\" (UniqueName: \"kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.566852 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.566912 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.567538 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.567555 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.598631 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8c44\" (UniqueName: \"kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44\") pod \"community-operators-5xgng\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") " pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:21 crc kubenswrapper[4842]: I0202 07:56:21.646342 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:22 crc kubenswrapper[4842]: I0202 07:56:22.122514 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5xgng"]
Feb 02 07:56:23 crc kubenswrapper[4842]: I0202 07:56:23.081855 4842 generic.go:334] "Generic (PLEG): container finished" podID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerID="9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02" exitCode=0
Feb 02 07:56:23 crc kubenswrapper[4842]: I0202 07:56:23.081969 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerDied","Data":"9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02"}
Feb 02 07:56:23 crc kubenswrapper[4842]: I0202 07:56:23.082266 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerStarted","Data":"4f41e69dad5dabb175e12aad2d4453f9d986e25ee248b8effa51bf29a75e83c8"}
Feb 02 07:56:24 crc kubenswrapper[4842]: I0202 07:56:24.095132 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerStarted","Data":"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a"}
Feb 02 07:56:25 crc kubenswrapper[4842]: I0202 07:56:25.102924 4842 generic.go:334] "Generic (PLEG): container finished" podID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerID="43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a" exitCode=0
Feb 02 07:56:25 crc kubenswrapper[4842]: I0202 07:56:25.102959 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerDied","Data":"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a"}
Feb 02 07:56:26 crc kubenswrapper[4842]: I0202 07:56:26.117363 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerStarted","Data":"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b"}
Feb 02 07:56:26 crc kubenswrapper[4842]: I0202 07:56:26.152816 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5xgng" podStartSLOduration=2.698815839 podStartE2EDuration="5.152789309s" podCreationTimestamp="2026-02-02 07:56:21 +0000 UTC" firstStartedPulling="2026-02-02 07:56:23.086907423 +0000 UTC m=+4208.464175365" lastFinishedPulling="2026-02-02 07:56:25.540880883 +0000 UTC m=+4210.918148835" observedRunningTime="2026-02-02 07:56:26.142553474 +0000 UTC m=+4211.519821396" watchObservedRunningTime="2026-02-02 07:56:26.152789309 +0000 UTC m=+4211.530057231"
Feb 02 07:56:31 crc kubenswrapper[4842]: I0202 07:56:31.646956 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:31 crc kubenswrapper[4842]: I0202 07:56:31.648502 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:31 crc kubenswrapper[4842]: I0202 07:56:31.718885 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:32 crc kubenswrapper[4842]: I0202 07:56:32.249449 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:32 crc kubenswrapper[4842]: I0202 07:56:32.323350 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5xgng"]
Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.192087 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5xgng" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="registry-server" containerID="cri-o://131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b" gracePeriod=2
Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.677857 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xgng"
Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.783061 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities\") pod \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") "
Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.783405 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content\") pod \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") "
Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.783497 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8c44\" (UniqueName: \"kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44\") pod \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\" (UID: \"472955b5-64fa-49fb-a6d5-78e8267c9e3a\") "
Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.784584 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities" (OuterVolumeSpecName: "utilities") pod "472955b5-64fa-49fb-a6d5-78e8267c9e3a" (UID: "472955b5-64fa-49fb-a6d5-78e8267c9e3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.791976 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44" (OuterVolumeSpecName: "kube-api-access-j8c44") pod "472955b5-64fa-49fb-a6d5-78e8267c9e3a" (UID: "472955b5-64fa-49fb-a6d5-78e8267c9e3a"). InnerVolumeSpecName "kube-api-access-j8c44". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.865502 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "472955b5-64fa-49fb-a6d5-78e8267c9e3a" (UID: "472955b5-64fa-49fb-a6d5-78e8267c9e3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.884781 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.884812 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/472955b5-64fa-49fb-a6d5-78e8267c9e3a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:56:34 crc kubenswrapper[4842]: I0202 07:56:34.884829 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8c44\" (UniqueName: \"kubernetes.io/projected/472955b5-64fa-49fb-a6d5-78e8267c9e3a-kube-api-access-j8c44\") on node \"crc\" DevicePath \"\"" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.207500 4842 generic.go:334] "Generic (PLEG): container finished" podID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerID="131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b" exitCode=0 Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.207622 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5xgng" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.207614 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerDied","Data":"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b"} Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.207866 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5xgng" event={"ID":"472955b5-64fa-49fb-a6d5-78e8267c9e3a","Type":"ContainerDied","Data":"4f41e69dad5dabb175e12aad2d4453f9d986e25ee248b8effa51bf29a75e83c8"} Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.207935 4842 scope.go:117] "RemoveContainer" containerID="131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.253586 4842 scope.go:117] "RemoveContainer" containerID="43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.287758 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5xgng"] Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.301769 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5xgng"] Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.316609 4842 scope.go:117] "RemoveContainer" containerID="9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.345776 4842 scope.go:117] "RemoveContainer" containerID="131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b" Feb 02 07:56:35 crc kubenswrapper[4842]: E0202 07:56:35.346371 4842 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b\": container with ID starting with 131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b not found: ID does not exist" containerID="131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.346405 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b"} err="failed to get container status \"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b\": rpc error: code = NotFound desc = could not find container \"131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b\": container with ID starting with 131ad1e8b08629c9d730b42697cc5cb98b699c2dadd8669eccc92eca8b9b2d1b not found: ID does not exist" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.346430 4842 scope.go:117] "RemoveContainer" containerID="43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a" Feb 02 07:56:35 crc kubenswrapper[4842]: E0202 07:56:35.346745 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a\": container with ID starting with 43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a not found: ID does not exist" containerID="43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.346792 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a"} err="failed to get container status \"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a\": rpc error: code = NotFound desc = could not find container \"43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a\": container with ID starting with 43a72ad3ce2fb89821ff3c0f385176e873966cc0f12a09722b0c6267fe77041a not found: ID does not exist" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.346821 4842 scope.go:117] "RemoveContainer" containerID="9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02" Feb 02 07:56:35 crc kubenswrapper[4842]: E0202 07:56:35.347142 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02\": container with ID starting with 9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02 not found: ID does not exist" containerID="9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.347169 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02"} err="failed to get container status \"9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02\": rpc error: code = NotFound desc = could not find container \"9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02\": container with ID starting with 9d9bc56eb8d028342795ef504dd3c40f973bc552d00aee75cedfa9d843eaaf02 not found: ID does not exist" Feb 02 07:56:35 crc kubenswrapper[4842]: I0202 07:56:35.441743 4842 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" path="/var/lib/kubelet/pods/472955b5-64fa-49fb-a6d5-78e8267c9e3a/volumes" Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.146468 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.147295 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.147370 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.148207 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.148338 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" gracePeriod=600 Feb 02 07:56:42 crc kubenswrapper[4842]: E0202 07:56:42.280302 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.282380 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" exitCode=0 Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.282497 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"} Feb 02 07:56:42 crc kubenswrapper[4842]: I0202 07:56:42.282556 4842 scope.go:117] "RemoveContainer" containerID="2a6b1b10d828e24dab9ac38a1a9d09d8e3ce721fcbac4b2dc553e7b889f1a4f2" Feb 02 07:56:43 crc kubenswrapper[4842]: I0202 07:56:43.297927 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:56:43 crc kubenswrapper[4842]: E0202 07:56:43.298400 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:56:54 crc kubenswrapper[4842]: I0202 07:56:54.433276 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:56:54 crc kubenswrapper[4842]: E0202 07:56:54.434895 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:57:05 crc kubenswrapper[4842]: I0202 07:57:05.448144 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:57:05 crc kubenswrapper[4842]: E0202 07:57:05.452475 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:57:16 crc kubenswrapper[4842]: I0202 07:57:16.435213 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:57:16 crc kubenswrapper[4842]: E0202 07:57:16.436328 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:57:27 crc kubenswrapper[4842]: I0202 07:57:27.437156 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:57:27 crc kubenswrapper[4842]: E0202 07:57:27.438864 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:57:38 crc kubenswrapper[4842]: I0202 07:57:38.433922 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:57:38 crc kubenswrapper[4842]: E0202 07:57:38.434904 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:57:52 crc kubenswrapper[4842]: I0202 07:57:52.435294 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:57:52 crc kubenswrapper[4842]: E0202 07:57:52.437346 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:58:06 crc kubenswrapper[4842]: I0202 07:58:06.434948 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:58:06 crc kubenswrapper[4842]: E0202 07:58:06.436248 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:58:19 crc kubenswrapper[4842]: I0202 07:58:19.433825 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:58:19 crc kubenswrapper[4842]: E0202 07:58:19.434609 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:58:30 crc kubenswrapper[4842]: I0202 07:58:30.434021 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:58:30 crc kubenswrapper[4842]: E0202 07:58:30.435112 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:58:43 crc kubenswrapper[4842]: I0202 07:58:43.434443 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:58:43 crc kubenswrapper[4842]: E0202 07:58:43.435723 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" 
podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:58:54 crc kubenswrapper[4842]: I0202 07:58:54.433895 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:58:54 crc kubenswrapper[4842]: E0202 07:58:54.434538 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:59:05 crc kubenswrapper[4842]: I0202 07:59:05.441927 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:59:05 crc kubenswrapper[4842]: E0202 07:59:05.442929 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.881265 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:06 crc kubenswrapper[4842]: E0202 07:59:06.881844 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="extract-utilities" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.881868 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="extract-utilities" Feb 02 07:59:06 crc kubenswrapper[4842]: E0202 07:59:06.881895 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="extract-content" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.881907 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="extract-content" Feb 02 07:59:06 crc kubenswrapper[4842]: E0202 07:59:06.881964 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="registry-server" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.881978 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="registry-server" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.882249 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="472955b5-64fa-49fb-a6d5-78e8267c9e3a" containerName="registry-server" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.883998 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.895470 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.979077 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.979550 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qsc\" (UniqueName: \"kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:06 crc kubenswrapper[4842]: I0202 07:59:06.979587 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.080337 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9qsc\" (UniqueName: \"kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.080386 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.080453 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.081028 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.081125 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.113295 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p9qsc\" (UniqueName: \"kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc\") pod \"redhat-operators-qhrxp\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.224429 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:07 crc kubenswrapper[4842]: I0202 07:59:07.689645 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:07 crc kubenswrapper[4842]: W0202 07:59:07.695954 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a0a396e_5aac_478f_82d3_a3f9dff03f2d.slice/crio-ecab244dff79ae4fb4e99c1c45c64404f5155c3d398783d47d57c131a2e686e7 WatchSource:0}: Error finding container ecab244dff79ae4fb4e99c1c45c64404f5155c3d398783d47d57c131a2e686e7: Status 404 returned error can't find the container with id ecab244dff79ae4fb4e99c1c45c64404f5155c3d398783d47d57c131a2e686e7 Feb 02 07:59:08 crc kubenswrapper[4842]: I0202 07:59:08.701863 4842 generic.go:334] "Generic (PLEG): container finished" podID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerID="f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656" exitCode=0 Feb 02 07:59:08 crc kubenswrapper[4842]: I0202 07:59:08.701938 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerDied","Data":"f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656"} Feb 02 07:59:08 crc kubenswrapper[4842]: I0202 07:59:08.702366 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerStarted","Data":"ecab244dff79ae4fb4e99c1c45c64404f5155c3d398783d47d57c131a2e686e7"} Feb 02 07:59:10 crc kubenswrapper[4842]: I0202 07:59:10.716268 4842 generic.go:334] "Generic (PLEG): container finished" podID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerID="db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144" exitCode=0 Feb 02 07:59:10 crc kubenswrapper[4842]: I0202 07:59:10.716340 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerDied","Data":"db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144"} Feb 02 07:59:11 crc kubenswrapper[4842]: I0202 07:59:11.727351 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerStarted","Data":"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151"} Feb 02 07:59:11 crc kubenswrapper[4842]: I0202 07:59:11.761864 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qhrxp" podStartSLOduration=3.348451666 podStartE2EDuration="5.761838658s" podCreationTimestamp="2026-02-02 07:59:06 +0000 UTC" firstStartedPulling="2026-02-02 07:59:08.705542471 +0000 UTC m=+4374.082810383" lastFinishedPulling="2026-02-02 07:59:11.118929443 +0000 UTC m=+4376.496197375" observedRunningTime="2026-02-02 07:59:11.752210119 +0000 UTC m=+4377.129478081" watchObservedRunningTime="2026-02-02 07:59:11.761838658 
+0000 UTC m=+4377.139106610" Feb 02 07:59:17 crc kubenswrapper[4842]: I0202 07:59:17.225181 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:17 crc kubenswrapper[4842]: I0202 07:59:17.225680 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:18 crc kubenswrapper[4842]: I0202 07:59:18.287662 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qhrxp" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="registry-server" probeResult="failure" output=< Feb 02 07:59:18 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 07:59:18 crc kubenswrapper[4842]: > Feb 02 07:59:18 crc kubenswrapper[4842]: I0202 07:59:18.433279 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:59:18 crc kubenswrapper[4842]: E0202 07:59:18.433638 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:59:27 crc kubenswrapper[4842]: I0202 07:59:27.304991 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:27 crc kubenswrapper[4842]: I0202 07:59:27.392046 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:27 crc kubenswrapper[4842]: I0202 07:59:27.557127 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:28 crc kubenswrapper[4842]: I0202 07:59:28.908540 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qhrxp" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="registry-server" containerID="cri-o://784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151" gracePeriod=2 Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.361587 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.542999 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9qsc\" (UniqueName: \"kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc\") pod \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.543132 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content\") pod \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.543194 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities\") pod \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\" (UID: \"4a0a396e-5aac-478f-82d3-a3f9dff03f2d\") " Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.545054 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities" (OuterVolumeSpecName: "utilities") pod "4a0a396e-5aac-478f-82d3-a3f9dff03f2d" (UID: "4a0a396e-5aac-478f-82d3-a3f9dff03f2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.547625 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.553527 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc" (OuterVolumeSpecName: "kube-api-access-p9qsc") pod "4a0a396e-5aac-478f-82d3-a3f9dff03f2d" (UID: "4a0a396e-5aac-478f-82d3-a3f9dff03f2d"). InnerVolumeSpecName "kube-api-access-p9qsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.649278 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9qsc\" (UniqueName: \"kubernetes.io/projected/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-kube-api-access-p9qsc\") on node \"crc\" DevicePath \"\"" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.705630 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a0a396e-5aac-478f-82d3-a3f9dff03f2d" (UID: "4a0a396e-5aac-478f-82d3-a3f9dff03f2d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.751912 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a0a396e-5aac-478f-82d3-a3f9dff03f2d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.923550 4842 generic.go:334] "Generic (PLEG): container finished" podID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerID="784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151" exitCode=0 Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.923627 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerDied","Data":"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151"} Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.923682 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhrxp" event={"ID":"4a0a396e-5aac-478f-82d3-a3f9dff03f2d","Type":"ContainerDied","Data":"ecab244dff79ae4fb4e99c1c45c64404f5155c3d398783d47d57c131a2e686e7"} Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.923701 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhrxp" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.923714 4842 scope.go:117] "RemoveContainer" containerID="784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.956601 4842 scope.go:117] "RemoveContainer" containerID="db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144" Feb 02 07:59:29 crc kubenswrapper[4842]: I0202 07:59:29.984389 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.011672 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qhrxp"] Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.014252 4842 scope.go:117] "RemoveContainer" containerID="f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656" Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.036521 4842 scope.go:117] "RemoveContainer" containerID="784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151" Feb 02 07:59:30 crc kubenswrapper[4842]: E0202 07:59:30.037010 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151\": container with ID starting with 784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151 not found: ID does not exist" containerID="784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151" Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.037050 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151"} err="failed to get container status \"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151\": rpc error: code = NotFound desc = could not find container \"784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151\": container with ID starting with 784ad1a9d3f4b81827879e1659da3b69e8d95b9a80161a4689fb75887ebd6151 not found: ID does not exist" Feb 02 07:59:30 crc 
kubenswrapper[4842]: I0202 07:59:30.037077 4842 scope.go:117] "RemoveContainer" containerID="db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144" Feb 02 07:59:30 crc kubenswrapper[4842]: E0202 07:59:30.037442 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144\": container with ID starting with db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144 not found: ID does not exist" containerID="db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144" Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.037471 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144"} err="failed to get container status \"db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144\": rpc error: code = NotFound desc = could not find container \"db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144\": container with ID starting with db7855b3fddda03f38165b244118f000b7127f27af3dee06fc546cdc6c226144 not found: ID does not exist" Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.037489 4842 scope.go:117] "RemoveContainer" containerID="f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656" Feb 02 07:59:30 crc kubenswrapper[4842]: E0202 07:59:30.038123 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656\": container with ID starting with f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656 not found: ID does not exist" containerID="f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656" Feb 02 07:59:30 crc kubenswrapper[4842]: I0202 07:59:30.038170 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656"} err="failed to get container status \"f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656\": rpc error: code = NotFound desc = could not find container \"f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656\": container with ID starting with f072226553d44d7b8a4d8f2e1588dfe5e98540b6f970544e5093aa31ff9d7656 not found: ID does not exist" Feb 02 07:59:31 crc kubenswrapper[4842]: I0202 07:59:31.434166 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 07:59:31 crc kubenswrapper[4842]: E0202 07:59:31.435086 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 07:59:31 crc kubenswrapper[4842]: I0202 07:59:31.449055 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" path="/var/lib/kubelet/pods/4a0a396e-5aac-478f-82d3-a3f9dff03f2d/volumes" Feb 02 07:59:45 crc kubenswrapper[4842]: I0202 07:59:45.441409 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" 
Feb 02 07:59:45 crc kubenswrapper[4842]: E0202 07:59:45.442362 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.208909 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"]
Feb 02 08:00:00 crc kubenswrapper[4842]: E0202 08:00:00.209737 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="extract-utilities"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.209750 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="extract-utilities"
Feb 02 08:00:00 crc kubenswrapper[4842]: E0202 08:00:00.209778 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="extract-content"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.209784 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="extract-content"
Feb 02 08:00:00 crc kubenswrapper[4842]: E0202 08:00:00.209805 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="registry-server"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.209812 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="registry-server"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.209944 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0a396e-5aac-478f-82d3-a3f9dff03f2d" containerName="registry-server"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.210381 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.211980 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.212058 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.223204 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"]
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.390827 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gtq7\" (UniqueName: \"kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.390890 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.391046 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.433659 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:00:00 crc kubenswrapper[4842]: E0202 08:00:00.433927 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.492479 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gtq7\" (UniqueName: \"kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.492782 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.493038 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.493951 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.503498 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.515783 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gtq7\" (UniqueName: \"kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7\") pod \"collect-profiles-29500320-mjgfl\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:00 crc kubenswrapper[4842]: I0202 08:00:00.560792 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:01 crc kubenswrapper[4842]: I0202 08:00:01.012013 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"]
Feb 02 08:00:01 crc kubenswrapper[4842]: I0202 08:00:01.223687 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl" event={"ID":"e01be79b-cbb5-4540-9a1c-5d0891ed6399","Type":"ContainerStarted","Data":"40aebe991f98f0098755eeb06cace74a285f48cd45fe5b2462d0a0f5a305f461"}
Feb 02 08:00:02 crc kubenswrapper[4842]: I0202 08:00:02.234031 4842 generic.go:334] "Generic (PLEG): container finished" podID="e01be79b-cbb5-4540-9a1c-5d0891ed6399" containerID="53b8081d7a60c7c28d76b97b32f3fec298777e492876306e229be304ee7a402a" exitCode=0
Feb 02 08:00:02 crc kubenswrapper[4842]: I0202 08:00:02.234164 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl" event={"ID":"e01be79b-cbb5-4540-9a1c-5d0891ed6399","Type":"ContainerDied","Data":"53b8081d7a60c7c28d76b97b32f3fec298777e492876306e229be304ee7a402a"}
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.586439 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.736510 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume\") pod \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") "
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.736574 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume\") pod \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") "
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.736659 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gtq7\" (UniqueName: \"kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7\") pod \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\" (UID: \"e01be79b-cbb5-4540-9a1c-5d0891ed6399\") "
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.738574 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume" (OuterVolumeSpecName: "config-volume") pod "e01be79b-cbb5-4540-9a1c-5d0891ed6399" (UID: "e01be79b-cbb5-4540-9a1c-5d0891ed6399"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.743834 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e01be79b-cbb5-4540-9a1c-5d0891ed6399" (UID: "e01be79b-cbb5-4540-9a1c-5d0891ed6399"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.760981 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7" (OuterVolumeSpecName: "kube-api-access-9gtq7") pod "e01be79b-cbb5-4540-9a1c-5d0891ed6399" (UID: "e01be79b-cbb5-4540-9a1c-5d0891ed6399"). InnerVolumeSpecName "kube-api-access-9gtq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.838475 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e01be79b-cbb5-4540-9a1c-5d0891ed6399-config-volume\") on node \"crc\" DevicePath \"\""
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.838511 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e01be79b-cbb5-4540-9a1c-5d0891ed6399-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 02 08:00:03 crc kubenswrapper[4842]: I0202 08:00:03.838522 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gtq7\" (UniqueName: \"kubernetes.io/projected/e01be79b-cbb5-4540-9a1c-5d0891ed6399-kube-api-access-9gtq7\") on node \"crc\" DevicePath \"\""
Feb 02 08:00:04 crc kubenswrapper[4842]: I0202 08:00:04.254324 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl" event={"ID":"e01be79b-cbb5-4540-9a1c-5d0891ed6399","Type":"ContainerDied","Data":"40aebe991f98f0098755eeb06cace74a285f48cd45fe5b2462d0a0f5a305f461"}
Feb 02 08:00:04 crc kubenswrapper[4842]: I0202 08:00:04.254371 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40aebe991f98f0098755eeb06cace74a285f48cd45fe5b2462d0a0f5a305f461"
Feb 02 08:00:04 crc kubenswrapper[4842]: I0202 08:00:04.254513 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500320-mjgfl"
Feb 02 08:00:04 crc kubenswrapper[4842]: I0202 08:00:04.678012 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb"]
Feb 02 08:00:04 crc kubenswrapper[4842]: I0202 08:00:04.692572 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500275-ts5jb"]
Feb 02 08:00:05 crc kubenswrapper[4842]: I0202 08:00:05.462570 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94334935-cf80-444c-b508-8c45e9780eee" path="/var/lib/kubelet/pods/94334935-cf80-444c-b508-8c45e9780eee/volumes"
Feb 02 08:00:14 crc kubenswrapper[4842]: I0202 08:00:14.433790 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:00:14 crc kubenswrapper[4842]: E0202 08:00:14.434694 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:00:22 crc kubenswrapper[4842]: I0202 08:00:22.396114 4842 scope.go:117] "RemoveContainer" containerID="3ec04990d6c97adea2fe95dabf427fb8df7522b562c84dbbcac33e51d0d54b26"
Feb 02 08:00:26 crc kubenswrapper[4842]: I0202 08:00:26.434106 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:00:26 crc kubenswrapper[4842]: E0202 08:00:26.437207 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:00:39 crc kubenswrapper[4842]: I0202 08:00:39.433615 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:00:39 crc kubenswrapper[4842]: E0202 08:00:39.434855 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:00:51 crc kubenswrapper[4842]: I0202 08:00:51.433785 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:00:51 crc kubenswrapper[4842]: E0202 08:00:51.434683 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:01:02 crc kubenswrapper[4842]: I0202 08:01:02.434554 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:01:02 crc kubenswrapper[4842]: E0202 08:01:02.435724 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:01:13 crc kubenswrapper[4842]: I0202 08:01:13.434085 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:01:13 crc kubenswrapper[4842]: E0202 08:01:13.435377 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:01:24 crc kubenswrapper[4842]: I0202 08:01:24.433408 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:01:24 crc kubenswrapper[4842]: E0202 08:01:24.434657 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:01:38 crc kubenswrapper[4842]: I0202 08:01:38.433474 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:01:38 crc kubenswrapper[4842]: E0202 08:01:38.434094 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:01:52 crc kubenswrapper[4842]: I0202 08:01:52.433623 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90"
Feb 02 08:01:53 crc kubenswrapper[4842]: I0202 08:01:53.297456 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0"}
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.459900 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"]
Feb 02 08:02:42 crc kubenswrapper[4842]: E0202 08:02:42.461504 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01be79b-cbb5-4540-9a1c-5d0891ed6399" containerName="collect-profiles"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.461527 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01be79b-cbb5-4540-9a1c-5d0891ed6399" containerName="collect-profiles"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.461773 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01be79b-cbb5-4540-9a1c-5d0891ed6399" containerName="collect-profiles"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.463242 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.480064 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"]
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.522903 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.522995 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.523097 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.624277 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.624721 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.624795 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.625107 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.625108 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.664501 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") pod \"certified-operators-8tgjk\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") " pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:42 crc kubenswrapper[4842]: I0202 08:02:42.838577 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:43 crc kubenswrapper[4842]: I0202 08:02:43.323961 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"]
Feb 02 08:02:43 crc kubenswrapper[4842]: I0202 08:02:43.779129 4842 generic.go:334] "Generic (PLEG): container finished" podID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerID="67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8" exitCode=0
Feb 02 08:02:43 crc kubenswrapper[4842]: I0202 08:02:43.779361 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerDied","Data":"67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8"}
Feb 02 08:02:43 crc kubenswrapper[4842]: I0202 08:02:43.779439 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerStarted","Data":"f7a5d6b272b6c311a39a8e9ddc101fbb1653df24c1f041cb73a7bef8806bea46"}
Feb 02 08:02:43 crc kubenswrapper[4842]: I0202 08:02:43.781804 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 02 08:02:44 crc kubenswrapper[4842]: I0202 08:02:44.788298 4842 generic.go:334] "Generic (PLEG): container finished" podID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerID="632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215" exitCode=0
Feb 02 08:02:44 crc kubenswrapper[4842]: I0202 08:02:44.788354 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerDied","Data":"632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215"}
Feb 02 08:02:45 crc kubenswrapper[4842]: I0202 08:02:45.800376 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerStarted","Data":"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858"}
Feb 02 08:02:45 crc kubenswrapper[4842]: I0202 08:02:45.830873 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8tgjk" podStartSLOduration=2.432668937 podStartE2EDuration="3.830848179s" podCreationTimestamp="2026-02-02 08:02:42 +0000 UTC" firstStartedPulling="2026-02-02 08:02:43.781528657 +0000 UTC m=+4589.158796569" lastFinishedPulling="2026-02-02 08:02:45.179707869 +0000 UTC m=+4590.556975811" observedRunningTime="2026-02-02 08:02:45.823042195 +0000 UTC m=+4591.200310137" watchObservedRunningTime="2026-02-02 08:02:45.830848179 +0000 UTC m=+4591.208116121"
Feb 02 08:02:52 crc kubenswrapper[4842]: I0202 08:02:52.839726 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:52 crc kubenswrapper[4842]: I0202 08:02:52.842042 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:52 crc kubenswrapper[4842]: I0202 08:02:52.905543 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:53 crc kubenswrapper[4842]: I0202 08:02:53.931926 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:53 crc kubenswrapper[4842]: I0202 08:02:53.995313 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"]
Feb 02 08:02:55 crc kubenswrapper[4842]: I0202 08:02:55.882431 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8tgjk" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="registry-server" containerID="cri-o://b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858" gracePeriod=2
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.450442 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8tgjk"
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.538132 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") pod \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") "
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.538287 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities\") pod \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") "
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.538304 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content\") pod \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\" (UID: \"bc53b20b-5fc0-438a-869d-7e76e878d5ee\") "
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.539442 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities" (OuterVolumeSpecName: "utilities") pod "bc53b20b-5fc0-438a-869d-7e76e878d5ee" (UID: "bc53b20b-5fc0-438a-869d-7e76e878d5ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.543582 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j" (OuterVolumeSpecName: "kube-api-access-hwr9j") pod "bc53b20b-5fc0-438a-869d-7e76e878d5ee" (UID: "bc53b20b-5fc0-438a-869d-7e76e878d5ee"). InnerVolumeSpecName "kube-api-access-hwr9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.592006 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc53b20b-5fc0-438a-869d-7e76e878d5ee" (UID: "bc53b20b-5fc0-438a-869d-7e76e878d5ee").
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.639981 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.640016 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc53b20b-5fc0-438a-869d-7e76e878d5ee-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.640028 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwr9j\" (UniqueName: \"kubernetes.io/projected/bc53b20b-5fc0-438a-869d-7e76e878d5ee-kube-api-access-hwr9j\") on node \"crc\" DevicePath \"\"" Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.892766 4842 generic.go:334] "Generic (PLEG): container finished" podID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerID="b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858" exitCode=0 Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.892808 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerDied","Data":"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858"} Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.892842 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8tgjk" event={"ID":"bc53b20b-5fc0-438a-869d-7e76e878d5ee","Type":"ContainerDied","Data":"f7a5d6b272b6c311a39a8e9ddc101fbb1653df24c1f041cb73a7bef8806bea46"} Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.892861 4842 scope.go:117] "RemoveContainer" containerID="b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858" Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.892886 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8tgjk" Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.927078 4842 scope.go:117] "RemoveContainer" containerID="632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215" Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.962880 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"] Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.962962 4842 scope.go:117] "RemoveContainer" containerID="67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8" Feb 02 08:02:56 crc kubenswrapper[4842]: I0202 08:02:56.977659 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8tgjk"] Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.000748 4842 scope.go:117] "RemoveContainer" containerID="b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858" Feb 02 08:02:57 crc kubenswrapper[4842]: E0202 08:02:57.001327 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858\": container with ID starting with b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858 not found: ID does not exist" containerID="b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858" Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.001386 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858"} err="failed to get container status \"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858\": rpc error: code = NotFound desc = could not find container \"b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858\": container with ID starting with b38a0f525e34109d528da023856088ebc31983057cf34f454a9669800e3e5858 not found: ID does not exist" Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.001425 4842 scope.go:117] "RemoveContainer" containerID="632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215" Feb 02 08:02:57 crc kubenswrapper[4842]: E0202 08:02:57.003595 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215\": container with ID starting with 632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215 not found: ID does not exist" containerID="632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215" Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.003641 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215"} err="failed to get container status \"632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215\": rpc error: code = NotFound desc = could not find container \"632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215\": container with ID starting with 632410db9e02b65d861e76302c172938d95c92d213cf6ad65798a7c600010215 not found: ID does not exist" Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.003668 4842 scope.go:117] "RemoveContainer" containerID="67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8" Feb 02 08:02:57 crc kubenswrapper[4842]: E0202 08:02:57.003894 4842 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8\": container with ID starting with 67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8 not found: ID does not exist" containerID="67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8" Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.003914 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8"} err="failed to get container status \"67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8\": rpc error: code = NotFound desc = could not find container \"67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8\": container with ID starting with 67354cd05b48eb0f969928e72d9a5a002f76c7c6ac1495e11291d4e86f731ae8 not found: ID does not exist" Feb 02 08:02:57 crc kubenswrapper[4842]: I0202 08:02:57.448065 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" path="/var/lib/kubelet/pods/bc53b20b-5fc0-438a-869d-7e76e878d5ee/volumes" Feb 02 08:04:12 crc kubenswrapper[4842]: I0202 08:04:12.146245 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:04:12 crc kubenswrapper[4842]: I0202 08:04:12.146919 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:04:42 crc kubenswrapper[4842]: I0202 08:04:42.146658 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:04:42 crc kubenswrapper[4842]: I0202 08:04:42.147715 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.146318 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.146977 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.147039 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.147860 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.147958 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0" gracePeriod=600 Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.324559 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0" exitCode=0 Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.324647 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0"} Feb 02 08:05:12 crc kubenswrapper[4842]: I0202 08:05:12.325056 4842 scope.go:117] "RemoveContainer" containerID="899e8bfb0c36681dc9584a4ab1412579a8d65cee232ae2b3eea4d82962340f90" Feb 02 08:05:13 crc kubenswrapper[4842]: I0202 08:05:13.344671 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"} Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.355887 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vr5fq"] Feb 02 08:06:41 crc kubenswrapper[4842]: E0202 08:06:41.357046 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="registry-server" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.357069 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="registry-server" Feb 02 08:06:41 crc kubenswrapper[4842]: E0202 08:06:41.357100 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="extract-utilities" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.357113 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="extract-utilities" Feb 02 08:06:41 crc kubenswrapper[4842]: E0202 08:06:41.357149 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="extract-content" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.357161 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="extract-content" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.357439 4842 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bc53b20b-5fc0-438a-869d-7e76e878d5ee" containerName="registry-server" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.368996 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vr5fq"] Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.369192 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.498495 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.498700 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.498786 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcbsj\" (UniqueName: \"kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.600346 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.600503 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.600576 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcbsj\" (UniqueName: \"kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.600984 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.601544 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content\") pod \"community-operators-vr5fq\" (UID: 
\"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.641164 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcbsj\" (UniqueName: \"kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj\") pod \"community-operators-vr5fq\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:41 crc kubenswrapper[4842]: I0202 08:06:41.704211 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:42 crc kubenswrapper[4842]: I0202 08:06:42.274283 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vr5fq"] Feb 02 08:06:43 crc kubenswrapper[4842]: I0202 08:06:43.241061 4842 generic.go:334] "Generic (PLEG): container finished" podID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerID="74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6" exitCode=0 Feb 02 08:06:43 crc kubenswrapper[4842]: I0202 08:06:43.241151 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerDied","Data":"74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6"} Feb 02 08:06:43 crc kubenswrapper[4842]: I0202 08:06:43.241529 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerStarted","Data":"88515406a5d093a4bdc5e334b46779b8f1794b92de340f3e5ab6bb4d8b6cc1d7"} Feb 02 08:06:45 crc kubenswrapper[4842]: I0202 08:06:45.260274 4842 generic.go:334] "Generic (PLEG): container finished" podID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerID="1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8" exitCode=0 Feb 02 08:06:45 crc kubenswrapper[4842]: I0202 08:06:45.260365 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerDied","Data":"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8"} Feb 02 08:06:46 crc kubenswrapper[4842]: I0202 08:06:46.272267 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerStarted","Data":"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e"} Feb 02 08:06:46 crc kubenswrapper[4842]: I0202 08:06:46.302907 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vr5fq" podStartSLOduration=2.801329031 podStartE2EDuration="5.302882383s" podCreationTimestamp="2026-02-02 08:06:41 +0000 UTC" firstStartedPulling="2026-02-02 08:06:43.243570981 +0000 UTC m=+4828.620838933" lastFinishedPulling="2026-02-02 08:06:45.745124363 +0000 UTC m=+4831.122392285" observedRunningTime="2026-02-02 08:06:46.293853219 +0000 UTC m=+4831.671121191" watchObservedRunningTime="2026-02-02 08:06:46.302882383 +0000 UTC m=+4831.680150325" Feb 02 08:06:51 crc kubenswrapper[4842]: I0202 08:06:51.705118 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:51 crc kubenswrapper[4842]: I0202 
08:06:51.705812 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:51 crc kubenswrapper[4842]: I0202 08:06:51.780126 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:52 crc kubenswrapper[4842]: I0202 08:06:52.403201 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:52 crc kubenswrapper[4842]: I0202 08:06:52.472100 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vr5fq"] Feb 02 08:06:54 crc kubenswrapper[4842]: I0202 08:06:54.344420 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vr5fq" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="registry-server" containerID="cri-o://084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e" gracePeriod=2 Feb 02 08:06:54 crc kubenswrapper[4842]: I0202 08:06:54.814199 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.013769 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content\") pod \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.013880 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities\") pod \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.013928 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcbsj\" (UniqueName: \"kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj\") pod \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\" (UID: \"0a97006d-5b38-4131-8ed8-fe834ec55b0c\") " Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.015600 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities" (OuterVolumeSpecName: "utilities") pod "0a97006d-5b38-4131-8ed8-fe834ec55b0c" (UID: "0a97006d-5b38-4131-8ed8-fe834ec55b0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.024391 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj" (OuterVolumeSpecName: "kube-api-access-tcbsj") pod "0a97006d-5b38-4131-8ed8-fe834ec55b0c" (UID: "0a97006d-5b38-4131-8ed8-fe834ec55b0c"). InnerVolumeSpecName "kube-api-access-tcbsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.116470 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.116523 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcbsj\" (UniqueName: \"kubernetes.io/projected/0a97006d-5b38-4131-8ed8-fe834ec55b0c-kube-api-access-tcbsj\") on node \"crc\" DevicePath \"\"" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.161451 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a97006d-5b38-4131-8ed8-fe834ec55b0c" (UID: "0a97006d-5b38-4131-8ed8-fe834ec55b0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.217871 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a97006d-5b38-4131-8ed8-fe834ec55b0c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.373933 4842 generic.go:334] "Generic (PLEG): container finished" podID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerID="084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e" exitCode=0 Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.373990 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerDied","Data":"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e"} Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.374034 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vr5fq" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.374082 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vr5fq" event={"ID":"0a97006d-5b38-4131-8ed8-fe834ec55b0c","Type":"ContainerDied","Data":"88515406a5d093a4bdc5e334b46779b8f1794b92de340f3e5ab6bb4d8b6cc1d7"} Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.374121 4842 scope.go:117] "RemoveContainer" containerID="084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.409749 4842 scope.go:117] "RemoveContainer" containerID="1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.417417 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vr5fq"] Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.422302 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vr5fq"] Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.439640 4842 scope.go:117] "RemoveContainer" containerID="74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.443543 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" path="/var/lib/kubelet/pods/0a97006d-5b38-4131-8ed8-fe834ec55b0c/volumes" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.477188 4842 scope.go:117] "RemoveContainer" containerID="084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e" Feb 02 08:06:55 crc kubenswrapper[4842]: E0202 08:06:55.478204 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e\": container with ID starting with 084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e not found: ID does not exist" containerID="084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.478302 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e"} err="failed to get container status \"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e\": rpc error: code = NotFound desc = could not find container \"084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e\": container with ID starting with 084616683f48c7863333226e0320ebb7a781660cb9f29b7e0bd9e87b7fb2833e not found: ID does not exist" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.478346 4842 scope.go:117] "RemoveContainer" containerID="1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8" Feb 02 08:06:55 crc kubenswrapper[4842]: E0202 08:06:55.478949 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8\": container with ID starting with 1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8 not found: ID does not exist" containerID="1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.479000 4842 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8"} err="failed to get container status \"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8\": rpc error: code = NotFound desc = could not find container \"1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8\": container with ID starting with 1e9b31a5de2557e311a8266c07abbd83a6d11c87b8f9f6ce43241db611555db8 not found: ID does not exist" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.479033 4842 scope.go:117] "RemoveContainer" containerID="74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6" Feb 02 08:06:55 crc kubenswrapper[4842]: E0202 08:06:55.479519 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6\": container with ID starting with 74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6 not found: ID does not exist" containerID="74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6" Feb 02 08:06:55 crc kubenswrapper[4842]: I0202 08:06:55.479566 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6"} err="failed to get container status \"74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6\": rpc error: code = NotFound desc = could not find container \"74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6\": container with ID starting with 74f1703a6a6a4310d099f57bf0076e4da1c35a812538321945987e97284039d6 not found: ID does not exist" Feb 02 08:07:12 crc kubenswrapper[4842]: I0202 08:07:12.147303 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:07:12 crc kubenswrapper[4842]: I0202 08:07:12.147919 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:07:42 crc kubenswrapper[4842]: I0202 08:07:42.145763 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:07:42 crc kubenswrapper[4842]: I0202 08:07:42.146439 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:08:12 crc kubenswrapper[4842]: I0202 08:08:12.146358 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 
08:08:12 crc kubenswrapper[4842]: I0202 08:08:12.146987 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:08:12 crc kubenswrapper[4842]: I0202 08:08:12.147050 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 08:08:12 crc kubenswrapper[4842]: I0202 08:08:12.147784 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 08:08:12 crc kubenswrapper[4842]: I0202 08:08:12.147880 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" gracePeriod=600 Feb 02 08:08:12 crc kubenswrapper[4842]: E0202 08:08:12.295775 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:08:13 crc kubenswrapper[4842]: I0202 08:08:13.079028 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" exitCode=0 Feb 02 08:08:13 crc kubenswrapper[4842]: I0202 08:08:13.079107 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"} Feb 02 08:08:13 crc kubenswrapper[4842]: I0202 08:08:13.079616 4842 scope.go:117] "RemoveContainer" containerID="86f88fc17737727d0ac05b52a5ad8fd0c7f09725b75fca2be56fc8f0d447e9f0" Feb 02 08:08:13 crc kubenswrapper[4842]: I0202 08:08:13.080950 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:08:13 crc kubenswrapper[4842]: E0202 08:08:13.081650 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:08:23 crc kubenswrapper[4842]: I0202 08:08:23.434062 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:08:23 crc 
kubenswrapper[4842]: E0202 08:08:23.435438 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:08:35 crc kubenswrapper[4842]: I0202 08:08:35.436456 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:08:35 crc kubenswrapper[4842]: E0202 08:08:35.437204 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:08:49 crc kubenswrapper[4842]: I0202 08:08:49.432989 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:08:49 crc kubenswrapper[4842]: E0202 08:08:49.433941 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:09:03 crc kubenswrapper[4842]: I0202 08:09:03.434905 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:09:03 crc kubenswrapper[4842]: E0202 08:09:03.436091 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.226621 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"] Feb 02 08:09:16 crc kubenswrapper[4842]: E0202 08:09:16.228080 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="extract-utilities" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.228112 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="extract-utilities" Feb 02 08:09:16 crc kubenswrapper[4842]: E0202 08:09:16.228138 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="extract-content" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.228187 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="extract-content" Feb 02 08:09:16 crc kubenswrapper[4842]: E0202 08:09:16.228258 4842 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="registry-server" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.228280 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="registry-server" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.228632 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a97006d-5b38-4131-8ed8-fe834ec55b0c" containerName="registry-server" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.230889 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.281653 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"] Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.346431 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxdpk\" (UniqueName: \"kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.346529 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.346586 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.447431 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.447840 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.447899 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxdpk\" (UniqueName: \"kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.448253 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities\") pod \"redhat-operators-8dcf7\" (UID: 
\"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.448463 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.484518 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxdpk\" (UniqueName: \"kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk\") pod \"redhat-operators-8dcf7\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:16 crc kubenswrapper[4842]: I0202 08:09:16.573628 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.035757 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"] Feb 02 08:09:17 crc kubenswrapper[4842]: W0202 08:09:17.038313 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41679c6_e0b1_4af3_9742_8e2a44d2c736.slice/crio-5581f9691098a63423a4c1a8df4ef94f81968e6e21d32efbf3ba93853d05b9e0 WatchSource:0}: Error finding container 5581f9691098a63423a4c1a8df4ef94f81968e6e21d32efbf3ba93853d05b9e0: Status 404 returned error can't find the container with id 5581f9691098a63423a4c1a8df4ef94f81968e6e21d32efbf3ba93853d05b9e0 Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.434357 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:09:17 crc kubenswrapper[4842]: E0202 08:09:17.434633 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.654006 4842 generic.go:334] "Generic (PLEG): container finished" podID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerID="86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5" exitCode=0 Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.654097 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerDied","Data":"86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5"} Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.654547 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerStarted","Data":"5581f9691098a63423a4c1a8df4ef94f81968e6e21d32efbf3ba93853d05b9e0"} Feb 02 08:09:17 crc kubenswrapper[4842]: I0202 08:09:17.656363 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 08:09:18 crc kubenswrapper[4842]: I0202 
08:09:18.667203 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerStarted","Data":"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a"} Feb 02 08:09:19 crc kubenswrapper[4842]: I0202 08:09:19.674882 4842 generic.go:334] "Generic (PLEG): container finished" podID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerID="d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a" exitCode=0 Feb 02 08:09:19 crc kubenswrapper[4842]: I0202 08:09:19.675107 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerDied","Data":"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a"} Feb 02 08:09:20 crc kubenswrapper[4842]: I0202 08:09:20.718970 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8dcf7" podStartSLOduration=2.28792685 podStartE2EDuration="4.718936489s" podCreationTimestamp="2026-02-02 08:09:16 +0000 UTC" firstStartedPulling="2026-02-02 08:09:17.656023558 +0000 UTC m=+4983.033291480" lastFinishedPulling="2026-02-02 08:09:20.087033197 +0000 UTC m=+4985.464301119" observedRunningTime="2026-02-02 08:09:20.71777368 +0000 UTC m=+4986.095041622" watchObservedRunningTime="2026-02-02 08:09:20.718936489 +0000 UTC m=+4986.096204451" Feb 02 08:09:21 crc kubenswrapper[4842]: I0202 08:09:21.697853 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerStarted","Data":"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67"} Feb 02 08:09:26 crc kubenswrapper[4842]: I0202 08:09:26.578060 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:26 crc kubenswrapper[4842]: I0202 08:09:26.578635 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:27 crc kubenswrapper[4842]: I0202 08:09:27.642591 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8dcf7" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="registry-server" probeResult="failure" output=< Feb 02 08:09:27 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 08:09:27 crc kubenswrapper[4842]: > Feb 02 08:09:29 crc kubenswrapper[4842]: I0202 08:09:29.434067 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:09:29 crc kubenswrapper[4842]: E0202 08:09:29.434936 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:09:36 crc kubenswrapper[4842]: I0202 08:09:36.655952 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:36 crc kubenswrapper[4842]: I0202 08:09:36.714734 4842 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:36 crc kubenswrapper[4842]: I0202 08:09:36.903823 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"] Feb 02 08:09:37 crc kubenswrapper[4842]: I0202 08:09:37.831089 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8dcf7" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="registry-server" containerID="cri-o://40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67" gracePeriod=2 Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.283210 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dcf7" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.387694 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content\") pod \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.387758 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities\") pod \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.387992 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxdpk\" (UniqueName: \"kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk\") pod \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\" (UID: \"e41679c6-e0b1-4af3-9742-8e2a44d2c736\") " Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.390030 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities" (OuterVolumeSpecName: "utilities") pod "e41679c6-e0b1-4af3-9742-8e2a44d2c736" (UID: "e41679c6-e0b1-4af3-9742-8e2a44d2c736"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.394251 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk" (OuterVolumeSpecName: "kube-api-access-lxdpk") pod "e41679c6-e0b1-4af3-9742-8e2a44d2c736" (UID: "e41679c6-e0b1-4af3-9742-8e2a44d2c736"). InnerVolumeSpecName "kube-api-access-lxdpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.494150 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxdpk\" (UniqueName: \"kubernetes.io/projected/e41679c6-e0b1-4af3-9742-8e2a44d2c736-kube-api-access-lxdpk\") on node \"crc\" DevicePath \"\"" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.494204 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.574937 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e41679c6-e0b1-4af3-9742-8e2a44d2c736" (UID: "e41679c6-e0b1-4af3-9742-8e2a44d2c736"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.595948 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e41679c6-e0b1-4af3-9742-8e2a44d2c736-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843205 4842 generic.go:334] "Generic (PLEG): container finished" podID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerID="40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67" exitCode=0 Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843280 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerDied","Data":"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67"} Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843306 4842 util.go:48] "No ready sandbox for pod can be found. 
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843306 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8dcf7"
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843352 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8dcf7" event={"ID":"e41679c6-e0b1-4af3-9742-8e2a44d2c736","Type":"ContainerDied","Data":"5581f9691098a63423a4c1a8df4ef94f81968e6e21d32efbf3ba93853d05b9e0"}
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.843386 4842 scope.go:117] "RemoveContainer" containerID="40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67"
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.881089 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"]
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.881426 4842 scope.go:117] "RemoveContainer" containerID="d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a"
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.888949 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8dcf7"]
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.913347 4842 scope.go:117] "RemoveContainer" containerID="86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5"
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.939364 4842 scope.go:117] "RemoveContainer" containerID="40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67"
Feb 02 08:09:38 crc kubenswrapper[4842]: E0202 08:09:38.940861 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67\": container with ID starting with 40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67 not found: ID does not exist" containerID="40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67"
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.940927 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67"} err="failed to get container status \"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67\": rpc error: code = NotFound desc = could not find container \"40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67\": container with ID starting with 40bd0d6145b4819d49a98c31d3167c4ca0d09bfb8187c750e6f81b817a98be67 not found: ID does not exist"
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.940971 4842 scope.go:117] "RemoveContainer" containerID="d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a"
Feb 02 08:09:38 crc kubenswrapper[4842]: E0202 08:09:38.941576 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a\": container with ID starting with d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a not found: ID does not exist" containerID="d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a"
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.941635 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a"} err="failed to get container status \"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a\": rpc error: code = NotFound desc = could not find container \"d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a\": container with ID starting with d934fbfa3d8254d2a8dad9f465dff7d420df4f26b94f8a3c9dec82c03fcbdd3a not found: ID does not exist"
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.941676 4842 scope.go:117] "RemoveContainer" containerID="86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5"
Feb 02 08:09:38 crc kubenswrapper[4842]: E0202 08:09:38.942126 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5\": container with ID starting with 86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5 not found: ID does not exist" containerID="86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5"
Feb 02 08:09:38 crc kubenswrapper[4842]: I0202 08:09:38.942171 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5"} err="failed to get container status \"86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5\": rpc error: code = NotFound desc = could not find container \"86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5\": container with ID starting with 86450190d719552bacf08fdef513235c3ac5663a33f7fea6fc7ebc7afe8988f5 not found: ID does not exist"
Feb 02 08:09:39 crc kubenswrapper[4842]: I0202 08:09:39.448533 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" path="/var/lib/kubelet/pods/e41679c6-e0b1-4af3-9742-8e2a44d2c736/volumes"
Feb 02 08:09:40 crc kubenswrapper[4842]: I0202 08:09:40.433585 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:09:40 crc kubenswrapper[4842]: E0202 08:09:40.433813 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:09:54 crc kubenswrapper[4842]: I0202 08:09:54.434345 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:09:54 crc kubenswrapper[4842]: E0202 08:09:54.435436 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:10:09 crc kubenswrapper[4842]: I0202 08:10:09.434091 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:10:09 crc kubenswrapper[4842]: E0202 08:10:09.434884 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:10:23 crc kubenswrapper[4842]: I0202 08:10:23.434645 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:10:23 crc kubenswrapper[4842]: E0202 08:10:23.435618 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:10:34 crc kubenswrapper[4842]: I0202 08:10:34.434350 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:10:34 crc kubenswrapper[4842]: E0202 08:10:34.435465 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:10:47 crc kubenswrapper[4842]: I0202 08:10:47.433526 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:10:47 crc kubenswrapper[4842]: E0202 08:10:47.434447 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:10:58 crc kubenswrapper[4842]: I0202 08:10:58.433850 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:10:58 crc kubenswrapper[4842]: E0202 08:10:58.435101 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:11:12 crc kubenswrapper[4842]: I0202 08:11:12.433386 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:11:12 crc kubenswrapper[4842]: E0202 08:11:12.436478 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:11:25 crc kubenswrapper[4842]: I0202 08:11:25.436980 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:11:25 crc kubenswrapper[4842]: E0202 08:11:25.437669 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:11:40 crc kubenswrapper[4842]: I0202 08:11:40.434347 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:11:40 crc kubenswrapper[4842]: E0202 08:11:40.436426 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:11:51 crc kubenswrapper[4842]: I0202 08:11:51.433897 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:11:51 crc kubenswrapper[4842]: E0202 08:11:51.434897 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:12:04 crc kubenswrapper[4842]: I0202 08:12:04.434541 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:12:04 crc kubenswrapper[4842]: E0202 08:12:04.435691 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.452844 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"]
Feb 02 08:12:18 crc kubenswrapper[4842]: E0202 08:12:18.453900 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="registry-server"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.453921 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="registry-server"
Feb 02 08:12:18 crc kubenswrapper[4842]: E0202 08:12:18.453953 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="extract-utilities"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.453965 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="extract-utilities"
Feb 02 08:12:18 crc kubenswrapper[4842]: E0202 08:12:18.453994 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="extract-content"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.454007 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="extract-content"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.454281 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41679c6-e0b1-4af3-9742-8e2a44d2c736" containerName="registry-server"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.455973 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.473176 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"]
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.596211 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.596465 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.596535 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9vnt\" (UniqueName: \"kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.697786 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.698088 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.698119 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9vnt\" (UniqueName: \"kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.698717 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.698976 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.721467 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9vnt\" (UniqueName: \"kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt\") pod \"redhat-marketplace-jdh6g\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:18 crc kubenswrapper[4842]: I0202 08:12:18.781744 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdh6g"
Feb 02 08:12:19 crc kubenswrapper[4842]: I0202 08:12:19.039989 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"]
Feb 02 08:12:19 crc kubenswrapper[4842]: I0202 08:12:19.407949 4842 generic.go:334] "Generic (PLEG): container finished" podID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerID="d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e" exitCode=0
Feb 02 08:12:19 crc kubenswrapper[4842]: I0202 08:12:19.407987 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdh6g" event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerDied","Data":"d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e"}
Feb 02 08:12:19 crc kubenswrapper[4842]: I0202 08:12:19.408010 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdh6g" event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerStarted","Data":"7a8b719038fc8601b3c12eed556bae843f74499f666b80fc5215969cd88e23aa"}
Feb 02 08:12:19 crc kubenswrapper[4842]: I0202 08:12:19.439892 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877"
Feb 02 08:12:19 crc kubenswrapper[4842]: E0202 08:12:19.440250 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:12:20 crc kubenswrapper[4842]: I0202 08:12:20.423051 4842 generic.go:334] "Generic (PLEG): container finished" podID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerID="77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a" exitCode=0
event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerDied","Data":"77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a"} Feb 02 08:12:21 crc kubenswrapper[4842]: I0202 08:12:21.447975 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdh6g" event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerStarted","Data":"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5"} Feb 02 08:12:21 crc kubenswrapper[4842]: I0202 08:12:21.457556 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jdh6g" podStartSLOduration=2.021180834 podStartE2EDuration="3.457536613s" podCreationTimestamp="2026-02-02 08:12:18 +0000 UTC" firstStartedPulling="2026-02-02 08:12:19.409592666 +0000 UTC m=+5164.786860578" lastFinishedPulling="2026-02-02 08:12:20.845948405 +0000 UTC m=+5166.223216357" observedRunningTime="2026-02-02 08:12:21.455735748 +0000 UTC m=+5166.833003680" watchObservedRunningTime="2026-02-02 08:12:21.457536613 +0000 UTC m=+5166.834804535" Feb 02 08:12:28 crc kubenswrapper[4842]: I0202 08:12:28.782723 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:28 crc kubenswrapper[4842]: I0202 08:12:28.783169 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:28 crc kubenswrapper[4842]: I0202 08:12:28.854272 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:29 crc kubenswrapper[4842]: I0202 08:12:29.584153 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:29 crc kubenswrapper[4842]: I0202 08:12:29.648088 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"] Feb 02 08:12:31 crc kubenswrapper[4842]: I0202 08:12:31.536461 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jdh6g" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="registry-server" containerID="cri-o://85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5" gracePeriod=2 Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.051351 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.108865 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities\") pod \"08fe0e32-0a1d-4dee-8242-5f813885ae92\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.109084 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9vnt\" (UniqueName: \"kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt\") pod \"08fe0e32-0a1d-4dee-8242-5f813885ae92\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.109107 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content\") pod \"08fe0e32-0a1d-4dee-8242-5f813885ae92\" (UID: \"08fe0e32-0a1d-4dee-8242-5f813885ae92\") " Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.110555 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities" (OuterVolumeSpecName: "utilities") pod "08fe0e32-0a1d-4dee-8242-5f813885ae92" (UID: "08fe0e32-0a1d-4dee-8242-5f813885ae92"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.122526 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt" (OuterVolumeSpecName: "kube-api-access-x9vnt") pod "08fe0e32-0a1d-4dee-8242-5f813885ae92" (UID: "08fe0e32-0a1d-4dee-8242-5f813885ae92"). InnerVolumeSpecName "kube-api-access-x9vnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.141417 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08fe0e32-0a1d-4dee-8242-5f813885ae92" (UID: "08fe0e32-0a1d-4dee-8242-5f813885ae92"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.212019 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9vnt\" (UniqueName: \"kubernetes.io/projected/08fe0e32-0a1d-4dee-8242-5f813885ae92-kube-api-access-x9vnt\") on node \"crc\" DevicePath \"\"" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.212054 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.212069 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08fe0e32-0a1d-4dee-8242-5f813885ae92-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.434458 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:12:32 crc kubenswrapper[4842]: E0202 08:12:32.434871 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.548453 4842 generic.go:334] "Generic (PLEG): container finished" podID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerID="85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5" exitCode=0 Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.548503 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdh6g" event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerDied","Data":"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5"} Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.548581 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jdh6g" event={"ID":"08fe0e32-0a1d-4dee-8242-5f813885ae92","Type":"ContainerDied","Data":"7a8b719038fc8601b3c12eed556bae843f74499f666b80fc5215969cd88e23aa"} Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.548611 4842 scope.go:117] "RemoveContainer" containerID="85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.549342 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jdh6g" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.593157 4842 scope.go:117] "RemoveContainer" containerID="77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.617280 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"] Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.626160 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jdh6g"] Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.647064 4842 scope.go:117] "RemoveContainer" containerID="d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.679270 4842 scope.go:117] "RemoveContainer" containerID="85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5" Feb 02 08:12:32 crc kubenswrapper[4842]: E0202 08:12:32.679843 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5\": container with ID starting with 85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5 not found: ID does not exist" containerID="85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.679971 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5"} err="failed to get container status \"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5\": rpc error: code = NotFound desc = could not find container \"85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5\": container with ID starting with 85674eb676513f5c2ba51b70084f9f9ccfa37e6c634681d39a3c85668fdac0f5 not found: ID does not exist" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.680055 4842 scope.go:117] "RemoveContainer" containerID="77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a" Feb 02 08:12:32 crc kubenswrapper[4842]: E0202 08:12:32.680582 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a\": container with ID starting with 77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a not found: ID does not exist" containerID="77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.680624 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a"} err="failed to get container status \"77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a\": rpc error: code = NotFound desc = could not find container \"77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a\": container with ID starting with 77d53c7a64a83a74b20f3a149e50f5da523b040fa07bae890a57f2d5db21ed2a not found: ID does not exist" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.680650 4842 scope.go:117] "RemoveContainer" containerID="d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e" Feb 02 08:12:32 crc kubenswrapper[4842]: E0202 08:12:32.681125 4842 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e\": container with ID starting with d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e not found: ID does not exist" containerID="d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e" Feb 02 08:12:32 crc kubenswrapper[4842]: I0202 08:12:32.681159 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e"} err="failed to get container status \"d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e\": rpc error: code = NotFound desc = could not find container \"d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e\": container with ID starting with d173222a1b93229a2167679b79ba0b7008b287f3887900e6d34137172b1e7d5e not found: ID does not exist" Feb 02 08:12:33 crc kubenswrapper[4842]: I0202 08:12:33.446072 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" path="/var/lib/kubelet/pods/08fe0e32-0a1d-4dee-8242-5f813885ae92/volumes" Feb 02 08:12:44 crc kubenswrapper[4842]: I0202 08:12:44.433569 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:12:44 crc kubenswrapper[4842]: E0202 08:12:44.435327 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.015183 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:12:57 crc kubenswrapper[4842]: E0202 08:12:57.016283 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="extract-utilities" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.016307 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="extract-utilities" Feb 02 08:12:57 crc kubenswrapper[4842]: E0202 08:12:57.016328 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="registry-server" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.016341 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="registry-server" Feb 02 08:12:57 crc kubenswrapper[4842]: E0202 08:12:57.016388 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="extract-content" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.016403 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="extract-content" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.016656 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="08fe0e32-0a1d-4dee-8242-5f813885ae92" containerName="registry-server" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.018425 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.050764 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.100243 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.100740 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.101011 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb4vm\" (UniqueName: \"kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.202386 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.202451 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.202479 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb4vm\" (UniqueName: \"kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.203317 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.203758 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.229188 4842 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hb4vm\" (UniqueName: \"kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm\") pod \"certified-operators-jnhtv\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.341500 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:12:57 crc kubenswrapper[4842]: I0202 08:12:57.844281 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:12:58 crc kubenswrapper[4842]: I0202 08:12:58.434411 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:12:58 crc kubenswrapper[4842]: E0202 08:12:58.435123 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:12:58 crc kubenswrapper[4842]: I0202 08:12:58.762757 4842 generic.go:334] "Generic (PLEG): container finished" podID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerID="5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7" exitCode=0 Feb 02 08:12:58 crc kubenswrapper[4842]: I0202 08:12:58.762801 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerDied","Data":"5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7"} Feb 02 08:12:58 crc kubenswrapper[4842]: I0202 08:12:58.762844 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerStarted","Data":"e91d24bafb4ba23512441976977d4c1e9d4f0c5bb10c4601f8ae397a869e1aac"} Feb 02 08:12:59 crc kubenswrapper[4842]: I0202 08:12:59.776333 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerStarted","Data":"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e"} Feb 02 08:13:00 crc kubenswrapper[4842]: I0202 08:13:00.787904 4842 generic.go:334] "Generic (PLEG): container finished" podID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerID="f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e" exitCode=0 Feb 02 08:13:00 crc kubenswrapper[4842]: I0202 08:13:00.787985 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerDied","Data":"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e"} Feb 02 08:13:01 crc kubenswrapper[4842]: I0202 08:13:01.800682 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerStarted","Data":"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7"} Feb 02 08:13:01 crc kubenswrapper[4842]: I0202 08:13:01.823023 4842 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jnhtv" podStartSLOduration=3.3910354 podStartE2EDuration="5.823007434s" podCreationTimestamp="2026-02-02 08:12:56 +0000 UTC" firstStartedPulling="2026-02-02 08:12:58.764158503 +0000 UTC m=+5204.141426415" lastFinishedPulling="2026-02-02 08:13:01.196130537 +0000 UTC m=+5206.573398449" observedRunningTime="2026-02-02 08:13:01.819458536 +0000 UTC m=+5207.196726458" watchObservedRunningTime="2026-02-02 08:13:01.823007434 +0000 UTC m=+5207.200275346" Feb 02 08:13:07 crc kubenswrapper[4842]: I0202 08:13:07.341700 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:13:07 crc kubenswrapper[4842]: I0202 08:13:07.342541 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:13:07 crc kubenswrapper[4842]: I0202 08:13:07.407892 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:13:07 crc kubenswrapper[4842]: I0202 08:13:07.923595 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:13:08 crc kubenswrapper[4842]: I0202 08:13:08.042009 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:13:09 crc kubenswrapper[4842]: I0202 08:13:09.871495 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jnhtv" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="registry-server" containerID="cri-o://f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7" gracePeriod=2 Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.428388 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.558987 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content\") pod \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.559107 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb4vm\" (UniqueName: \"kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm\") pod \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.559372 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities\") pod \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\" (UID: \"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e\") " Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.560088 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities" (OuterVolumeSpecName: "utilities") pod "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" (UID: "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.561009 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.568363 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm" (OuterVolumeSpecName: "kube-api-access-hb4vm") pod "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" (UID: "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e"). InnerVolumeSpecName "kube-api-access-hb4vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.630668 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" (UID: "3332dfd1-239f-40e8-9ffa-b2dfb4c6422e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.661986 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.662018 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb4vm\" (UniqueName: \"kubernetes.io/projected/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e-kube-api-access-hb4vm\") on node \"crc\" DevicePath \"\"" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.878349 4842 generic.go:334] "Generic (PLEG): container finished" podID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerID="f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7" exitCode=0 Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.878390 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerDied","Data":"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7"} Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.878415 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jnhtv" event={"ID":"3332dfd1-239f-40e8-9ffa-b2dfb4c6422e","Type":"ContainerDied","Data":"e91d24bafb4ba23512441976977d4c1e9d4f0c5bb10c4601f8ae397a869e1aac"} Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.878431 4842 scope.go:117] "RemoveContainer" containerID="f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.878451 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jnhtv" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.904888 4842 scope.go:117] "RemoveContainer" containerID="f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.915461 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.942845 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jnhtv"] Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.945386 4842 scope.go:117] "RemoveContainer" containerID="5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.962625 4842 scope.go:117] "RemoveContainer" containerID="f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7" Feb 02 08:13:10 crc kubenswrapper[4842]: E0202 08:13:10.963075 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7\": container with ID starting with f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7 not found: ID does not exist" containerID="f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.963118 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7"} err="failed to get container status \"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7\": rpc error: code = NotFound desc = could not find container \"f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7\": container with ID starting with f02b169bc6d4f81a6e1fcbcc36e6fbc71482f39df6e66ff1593f2aeefb59c4f7 not found: ID does not exist" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.963145 4842 scope.go:117] "RemoveContainer" containerID="f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e" Feb 02 08:13:10 crc kubenswrapper[4842]: E0202 08:13:10.963618 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e\": container with ID starting with f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e not found: ID does not exist" containerID="f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.963670 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e"} err="failed to get container status \"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e\": rpc error: code = NotFound desc = could not find container \"f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e\": container with ID starting with f681a043c6757347836bffe4624a97dab090d16c617ca0e5e56a88bb027ade1e not found: ID does not exist" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.963702 4842 scope.go:117] "RemoveContainer" containerID="5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7" Feb 02 08:13:10 crc kubenswrapper[4842]: E0202 08:13:10.964015 4842 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7\": container with ID starting with 5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7 not found: ID does not exist" containerID="5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7" Feb 02 08:13:10 crc kubenswrapper[4842]: I0202 08:13:10.964040 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7"} err="failed to get container status \"5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7\": rpc error: code = NotFound desc = could not find container \"5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7\": container with ID starting with 5eef8870234cc24f518c7d89449ab0f23cf589da5ca2e179ea111d23b60faed7 not found: ID does not exist" Feb 02 08:13:11 crc kubenswrapper[4842]: I0202 08:13:11.452347 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" path="/var/lib/kubelet/pods/3332dfd1-239f-40e8-9ffa-b2dfb4c6422e/volumes" Feb 02 08:13:12 crc kubenswrapper[4842]: I0202 08:13:12.434101 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:13:12 crc kubenswrapper[4842]: I0202 08:13:12.895115 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92"} Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.164867 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4"] Feb 02 08:15:00 crc kubenswrapper[4842]: E0202 08:15:00.165966 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="extract-utilities" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.165990 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="extract-utilities" Feb 02 08:15:00 crc kubenswrapper[4842]: E0202 08:15:00.166026 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="extract-content" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.166043 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="extract-content" Feb 02 08:15:00 crc kubenswrapper[4842]: E0202 08:15:00.166080 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="registry-server" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.166097 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="registry-server" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.166422 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="3332dfd1-239f-40e8-9ffa-b2dfb4c6422e" containerName="registry-server" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.167155 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.171405 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.171429 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.178909 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4"] Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.331564 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66qfm\" (UniqueName: \"kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.331671 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.331726 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.433606 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66qfm\" (UniqueName: \"kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.433717 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.433788 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.434971 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume\") pod 
\"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.444867 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.471949 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66qfm\" (UniqueName: \"kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm\") pod \"collect-profiles-29500335-ms6c4\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.493461 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:00 crc kubenswrapper[4842]: I0202 08:15:00.983237 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4"] Feb 02 08:15:01 crc kubenswrapper[4842]: I0202 08:15:01.936010 4842 generic.go:334] "Generic (PLEG): container finished" podID="23e7ebd9-93a3-45db-8cff-07ae373b0879" containerID="745375a6b9c71aa829798c0d75b999195e39d9afe60926fdd96735a190433847" exitCode=0 Feb 02 08:15:01 crc kubenswrapper[4842]: I0202 08:15:01.936070 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" event={"ID":"23e7ebd9-93a3-45db-8cff-07ae373b0879","Type":"ContainerDied","Data":"745375a6b9c71aa829798c0d75b999195e39d9afe60926fdd96735a190433847"} Feb 02 08:15:01 crc kubenswrapper[4842]: I0202 08:15:01.936344 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" event={"ID":"23e7ebd9-93a3-45db-8cff-07ae373b0879","Type":"ContainerStarted","Data":"cb919aeb619658ee7321a50a61278ab5a442098a4e9675114f6becb8ddc68c30"} Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.293174 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.390888 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume\") pod \"23e7ebd9-93a3-45db-8cff-07ae373b0879\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.390958 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66qfm\" (UniqueName: \"kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm\") pod \"23e7ebd9-93a3-45db-8cff-07ae373b0879\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.391150 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume\") pod \"23e7ebd9-93a3-45db-8cff-07ae373b0879\" (UID: \"23e7ebd9-93a3-45db-8cff-07ae373b0879\") " Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.391959 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume" (OuterVolumeSpecName: "config-volume") pod "23e7ebd9-93a3-45db-8cff-07ae373b0879" (UID: "23e7ebd9-93a3-45db-8cff-07ae373b0879"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.399803 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm" (OuterVolumeSpecName: "kube-api-access-66qfm") pod "23e7ebd9-93a3-45db-8cff-07ae373b0879" (UID: "23e7ebd9-93a3-45db-8cff-07ae373b0879"). InnerVolumeSpecName "kube-api-access-66qfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.403564 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "23e7ebd9-93a3-45db-8cff-07ae373b0879" (UID: "23e7ebd9-93a3-45db-8cff-07ae373b0879"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.493171 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/23e7ebd9-93a3-45db-8cff-07ae373b0879-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.493203 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23e7ebd9-93a3-45db-8cff-07ae373b0879-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.493237 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66qfm\" (UniqueName: \"kubernetes.io/projected/23e7ebd9-93a3-45db-8cff-07ae373b0879-kube-api-access-66qfm\") on node \"crc\" DevicePath \"\"" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.955443 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" event={"ID":"23e7ebd9-93a3-45db-8cff-07ae373b0879","Type":"ContainerDied","Data":"cb919aeb619658ee7321a50a61278ab5a442098a4e9675114f6becb8ddc68c30"} Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.955498 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb919aeb619658ee7321a50a61278ab5a442098a4e9675114f6becb8ddc68c30" Feb 02 08:15:03 crc kubenswrapper[4842]: I0202 08:15:03.955601 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500335-ms6c4" Feb 02 08:15:04 crc kubenswrapper[4842]: I0202 08:15:04.386860 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"] Feb 02 08:15:04 crc kubenswrapper[4842]: I0202 08:15:04.393571 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500290-4rjz7"] Feb 02 08:15:05 crc kubenswrapper[4842]: I0202 08:15:05.451915 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe" path="/var/lib/kubelet/pods/2bdba5b1-7ddc-46ce-940e-86eb0f02a9fe/volumes" Feb 02 08:15:12 crc kubenswrapper[4842]: I0202 08:15:12.145826 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:15:12 crc kubenswrapper[4842]: I0202 08:15:12.146168 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:15:22 crc kubenswrapper[4842]: I0202 08:15:22.819025 4842 scope.go:117] "RemoveContainer" containerID="8dbf1ff40ae24c1cb278330205be0fe8707c50279bf4f5b00c195cfdd226a43f" Feb 02 08:15:42 crc kubenswrapper[4842]: I0202 08:15:42.146673 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 02 08:15:42 crc kubenswrapper[4842]: I0202 08:15:42.147449 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.146261 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.147210 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.147322 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.148150 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.148285 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92" gracePeriod=600 Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.594661 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92" exitCode=0 Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.594725 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92"} Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.595139 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82"} Feb 02 08:16:12 crc kubenswrapper[4842]: I0202 08:16:12.595179 4842 scope.go:117] "RemoveContainer" containerID="428f1549244ba8123b219560e78f7f58c26b7e0820e61fab5c56cc6f8b1cf877" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.047975 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:16:58 crc kubenswrapper[4842]: E0202 08:16:58.048902 4842 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="23e7ebd9-93a3-45db-8cff-07ae373b0879" containerName="collect-profiles" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.048919 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="23e7ebd9-93a3-45db-8cff-07ae373b0879" containerName="collect-profiles" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.049109 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="23e7ebd9-93a3-45db-8cff-07ae373b0879" containerName="collect-profiles" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.050362 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.065817 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.177562 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.177688 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsmzt\" (UniqueName: \"kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.177731 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.279334 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.279434 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsmzt\" (UniqueName: \"kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.279478 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.280442 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.280533 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.298548 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsmzt\" (UniqueName: \"kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt\") pod \"community-operators-9h4qp\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.372784 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:16:58 crc kubenswrapper[4842]: I0202 08:16:58.630107 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:16:59 crc kubenswrapper[4842]: I0202 08:16:59.165096 4842 generic.go:334] "Generic (PLEG): container finished" podID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerID="a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38" exitCode=0 Feb 02 08:16:59 crc kubenswrapper[4842]: I0202 08:16:59.165195 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerDied","Data":"a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38"} Feb 02 08:16:59 crc kubenswrapper[4842]: I0202 08:16:59.165600 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerStarted","Data":"f5e8a7529c8fe9e2eab927e7f8e49d3c717cd786f8001ac00969587b6bc359fb"} Feb 02 08:16:59 crc kubenswrapper[4842]: I0202 08:16:59.167390 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 08:17:00 crc kubenswrapper[4842]: I0202 08:17:00.178632 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerStarted","Data":"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e"} Feb 02 08:17:01 crc kubenswrapper[4842]: I0202 08:17:01.191329 4842 generic.go:334] "Generic (PLEG): container finished" podID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerID="7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e" exitCode=0 Feb 02 08:17:01 crc kubenswrapper[4842]: I0202 08:17:01.191415 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerDied","Data":"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e"} Feb 02 08:17:03 crc kubenswrapper[4842]: I0202 08:17:03.219760 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" 
event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerStarted","Data":"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4"} Feb 02 08:17:03 crc kubenswrapper[4842]: I0202 08:17:03.252298 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9h4qp" podStartSLOduration=2.582693617 podStartE2EDuration="5.252272985s" podCreationTimestamp="2026-02-02 08:16:58 +0000 UTC" firstStartedPulling="2026-02-02 08:16:59.166987694 +0000 UTC m=+5444.544255636" lastFinishedPulling="2026-02-02 08:17:01.836567062 +0000 UTC m=+5447.213835004" observedRunningTime="2026-02-02 08:17:03.247588999 +0000 UTC m=+5448.624856921" watchObservedRunningTime="2026-02-02 08:17:03.252272985 +0000 UTC m=+5448.629540907" Feb 02 08:17:08 crc kubenswrapper[4842]: I0202 08:17:08.372815 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:08 crc kubenswrapper[4842]: I0202 08:17:08.373359 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:08 crc kubenswrapper[4842]: I0202 08:17:08.423710 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:09 crc kubenswrapper[4842]: I0202 08:17:09.414192 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:09 crc kubenswrapper[4842]: I0202 08:17:09.509459 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.289880 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9h4qp" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="registry-server" containerID="cri-o://5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4" gracePeriod=2 Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.745574 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.895767 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities\") pod \"7082cb1f-29f3-4652-9b74-94e76fb391ed\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.895870 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsmzt\" (UniqueName: \"kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt\") pod \"7082cb1f-29f3-4652-9b74-94e76fb391ed\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.895899 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content\") pod \"7082cb1f-29f3-4652-9b74-94e76fb391ed\" (UID: \"7082cb1f-29f3-4652-9b74-94e76fb391ed\") " Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.897086 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities" (OuterVolumeSpecName: "utilities") pod "7082cb1f-29f3-4652-9b74-94e76fb391ed" (UID: "7082cb1f-29f3-4652-9b74-94e76fb391ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.915167 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt" (OuterVolumeSpecName: "kube-api-access-xsmzt") pod "7082cb1f-29f3-4652-9b74-94e76fb391ed" (UID: "7082cb1f-29f3-4652-9b74-94e76fb391ed"). InnerVolumeSpecName "kube-api-access-xsmzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.955587 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7082cb1f-29f3-4652-9b74-94e76fb391ed" (UID: "7082cb1f-29f3-4652-9b74-94e76fb391ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.997066 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsmzt\" (UniqueName: \"kubernetes.io/projected/7082cb1f-29f3-4652-9b74-94e76fb391ed-kube-api-access-xsmzt\") on node \"crc\" DevicePath \"\"" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.997098 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:17:11 crc kubenswrapper[4842]: I0202 08:17:11.997112 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7082cb1f-29f3-4652-9b74-94e76fb391ed-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.304494 4842 generic.go:334] "Generic (PLEG): container finished" podID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerID="5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4" exitCode=0 Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.304580 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerDied","Data":"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4"} Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.304661 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9h4qp" event={"ID":"7082cb1f-29f3-4652-9b74-94e76fb391ed","Type":"ContainerDied","Data":"f5e8a7529c8fe9e2eab927e7f8e49d3c717cd786f8001ac00969587b6bc359fb"} Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.304665 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9h4qp" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.304709 4842 scope.go:117] "RemoveContainer" containerID="5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.348364 4842 scope.go:117] "RemoveContainer" containerID="7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.371672 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.378837 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9h4qp"] Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.395838 4842 scope.go:117] "RemoveContainer" containerID="a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.426082 4842 scope.go:117] "RemoveContainer" containerID="5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4" Feb 02 08:17:12 crc kubenswrapper[4842]: E0202 08:17:12.426738 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4\": container with ID starting with 5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4 not found: ID does not exist" containerID="5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.426803 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4"} err="failed to get container status \"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4\": rpc error: code = NotFound desc = could not find container \"5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4\": container with ID starting with 5cf857914d155327387a42d7bdec87defde07abc1802ce9c909d1fbc7aa2d5d4 not found: ID does not exist" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.426838 4842 scope.go:117] "RemoveContainer" containerID="7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e" Feb 02 08:17:12 crc kubenswrapper[4842]: E0202 08:17:12.427484 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e\": container with ID starting with 7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e not found: ID does not exist" containerID="7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.427524 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e"} err="failed to get container status \"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e\": rpc error: code = NotFound desc = could not find container \"7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e\": container with ID starting with 7089a25637172a448410ccbe3b6a44a9c800b4a619e3a73ae36b95d769e1307e not found: ID does not exist" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.427549 4842 scope.go:117] "RemoveContainer" 
containerID="a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38" Feb 02 08:17:12 crc kubenswrapper[4842]: E0202 08:17:12.428332 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38\": container with ID starting with a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38 not found: ID does not exist" containerID="a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38" Feb 02 08:17:12 crc kubenswrapper[4842]: I0202 08:17:12.428378 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38"} err="failed to get container status \"a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38\": rpc error: code = NotFound desc = could not find container \"a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38\": container with ID starting with a976f6858cdbf3f1c590eacc5f8c9daad9c067c347932c3d2396018154650e38 not found: ID does not exist" Feb 02 08:17:13 crc kubenswrapper[4842]: I0202 08:17:13.452462 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" path="/var/lib/kubelet/pods/7082cb1f-29f3-4652-9b74-94e76fb391ed/volumes" Feb 02 08:18:12 crc kubenswrapper[4842]: I0202 08:18:12.146014 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:18:12 crc kubenswrapper[4842]: I0202 08:18:12.146690 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:18:42 crc kubenswrapper[4842]: I0202 08:18:42.146000 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:18:42 crc kubenswrapper[4842]: I0202 08:18:42.146656 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.145653 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.146164 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.146209 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.146844 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.146910 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" gracePeriod=600 Feb 02 08:19:12 crc kubenswrapper[4842]: E0202 08:19:12.286707 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.732418 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" exitCode=0 Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.732483 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82"} Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.732532 4842 scope.go:117] "RemoveContainer" containerID="6352945da641e26d3a6dce83e21b103005cf80f344e8fe0d66b6a98e2b650f92" Feb 02 08:19:12 crc kubenswrapper[4842]: I0202 08:19:12.733453 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:19:12 crc kubenswrapper[4842]: E0202 08:19:12.733975 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:19:27 crc kubenswrapper[4842]: I0202 08:19:27.433693 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:19:27 crc kubenswrapper[4842]: E0202 08:19:27.434807 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.827643 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:31 crc kubenswrapper[4842]: E0202 08:19:31.829771 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="extract-utilities" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.829810 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="extract-utilities" Feb 02 08:19:31 crc kubenswrapper[4842]: E0202 08:19:31.829864 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="extract-content" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.829883 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="extract-content" Feb 02 08:19:31 crc kubenswrapper[4842]: E0202 08:19:31.829934 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="registry-server" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.829955 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="registry-server" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.830347 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7082cb1f-29f3-4652-9b74-94e76fb391ed" containerName="registry-server" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.832647 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.845251 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.885145 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.885538 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlqhr\" (UniqueName: \"kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.885746 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.986414 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.986483 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlqhr\" (UniqueName: \"kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.986523 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.986977 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:31 crc kubenswrapper[4842]: I0202 08:19:31.987078 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.017671 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mlqhr\" (UniqueName: \"kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr\") pod \"redhat-operators-l4pbx\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.200797 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.651853 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.907653 4842 generic.go:334] "Generic (PLEG): container finished" podID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerID="9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370" exitCode=0 Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.907765 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerDied","Data":"9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370"} Feb 02 08:19:32 crc kubenswrapper[4842]: I0202 08:19:32.907971 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerStarted","Data":"17d09c4717c193ff0f39559deea84c2b67b4b56124ea4abb01b5757dc66fa47f"} Feb 02 08:19:33 crc kubenswrapper[4842]: I0202 08:19:33.920286 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerStarted","Data":"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f"} Feb 02 08:19:34 crc kubenswrapper[4842]: I0202 08:19:34.931448 4842 generic.go:334] "Generic (PLEG): container finished" podID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerID="056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f" exitCode=0 Feb 02 08:19:34 crc kubenswrapper[4842]: I0202 08:19:34.931558 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerDied","Data":"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f"} Feb 02 08:19:35 crc kubenswrapper[4842]: I0202 08:19:35.945678 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerStarted","Data":"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1"} Feb 02 08:19:35 crc kubenswrapper[4842]: I0202 08:19:35.980685 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l4pbx" podStartSLOduration=2.55347175 podStartE2EDuration="4.980665026s" podCreationTimestamp="2026-02-02 08:19:31 +0000 UTC" firstStartedPulling="2026-02-02 08:19:32.909107122 +0000 UTC m=+5598.286375034" lastFinishedPulling="2026-02-02 08:19:35.336300358 +0000 UTC m=+5600.713568310" observedRunningTime="2026-02-02 08:19:35.976441831 +0000 UTC m=+5601.353709823" watchObservedRunningTime="2026-02-02 08:19:35.980665026 +0000 UTC m=+5601.357932948" Feb 02 08:19:38 crc kubenswrapper[4842]: I0202 08:19:38.433490 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 
08:19:38 crc kubenswrapper[4842]: E0202 08:19:38.434436 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:19:42 crc kubenswrapper[4842]: I0202 08:19:42.201764 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:42 crc kubenswrapper[4842]: I0202 08:19:42.201884 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:43 crc kubenswrapper[4842]: I0202 08:19:43.267655 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l4pbx" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="registry-server" probeResult="failure" output=< Feb 02 08:19:43 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 08:19:43 crc kubenswrapper[4842]: > Feb 02 08:19:50 crc kubenswrapper[4842]: I0202 08:19:50.434288 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:19:50 crc kubenswrapper[4842]: E0202 08:19:50.435000 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:19:52 crc kubenswrapper[4842]: I0202 08:19:52.268729 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:52 crc kubenswrapper[4842]: I0202 08:19:52.328923 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:52 crc kubenswrapper[4842]: I0202 08:19:52.523445 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.091923 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l4pbx" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="registry-server" containerID="cri-o://d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1" gracePeriod=2 Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.584746 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.918779 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlqhr\" (UniqueName: \"kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr\") pod \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.918858 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities\") pod \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.918897 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content\") pod \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\" (UID: \"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7\") " Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.920121 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities" (OuterVolumeSpecName: "utilities") pod "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" (UID: "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:19:54 crc kubenswrapper[4842]: I0202 08:19:54.927012 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr" (OuterVolumeSpecName: "kube-api-access-mlqhr") pod "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" (UID: "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7"). InnerVolumeSpecName "kube-api-access-mlqhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.020696 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlqhr\" (UniqueName: \"kubernetes.io/projected/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-kube-api-access-mlqhr\") on node \"crc\" DevicePath \"\"" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.020735 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.098637 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" (UID: "62b266ca-ea4c-4fb2-a376-fb7ce2d341d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.100967 4842 generic.go:334] "Generic (PLEG): container finished" podID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerID="d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1" exitCode=0 Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.101030 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerDied","Data":"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1"} Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.101046 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4pbx" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.101065 4842 scope.go:117] "RemoveContainer" containerID="d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.101056 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4pbx" event={"ID":"62b266ca-ea4c-4fb2-a376-fb7ce2d341d7","Type":"ContainerDied","Data":"17d09c4717c193ff0f39559deea84c2b67b4b56124ea4abb01b5757dc66fa47f"} Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.121707 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.135543 4842 scope.go:117] "RemoveContainer" containerID="056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.156522 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.161465 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l4pbx"] Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.183876 4842 scope.go:117] "RemoveContainer" containerID="9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.209355 4842 scope.go:117] "RemoveContainer" containerID="d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1" Feb 02 08:19:55 crc kubenswrapper[4842]: E0202 08:19:55.209943 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1\": container with ID starting with d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1 not found: ID does not exist" containerID="d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.210001 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1"} err="failed to get container status \"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1\": rpc error: code = NotFound desc = could not find container \"d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1\": container with ID starting with d0179fb461459dca8885f9993122c8d8bab088f71fdeafeadf0dc93f0ffd05d1 not found: ID does not exist" Feb 02 08:19:55 crc 
kubenswrapper[4842]: I0202 08:19:55.210035 4842 scope.go:117] "RemoveContainer" containerID="056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f" Feb 02 08:19:55 crc kubenswrapper[4842]: E0202 08:19:55.210504 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f\": container with ID starting with 056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f not found: ID does not exist" containerID="056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.210543 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f"} err="failed to get container status \"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f\": rpc error: code = NotFound desc = could not find container \"056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f\": container with ID starting with 056481dbcf76796892ba9d4b9d75af8fd5c86c57785da9a384f1e2725e7ebc7f not found: ID does not exist" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.210570 4842 scope.go:117] "RemoveContainer" containerID="9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370" Feb 02 08:19:55 crc kubenswrapper[4842]: E0202 08:19:55.211075 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370\": container with ID starting with 9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370 not found: ID does not exist" containerID="9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.211141 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370"} err="failed to get container status \"9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370\": rpc error: code = NotFound desc = could not find container \"9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370\": container with ID starting with 9fbf6342f5aed563c2199345c3620453bcac644d49eb43961cc23dbb44e58370 not found: ID does not exist" Feb 02 08:19:55 crc kubenswrapper[4842]: I0202 08:19:55.450030 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" path="/var/lib/kubelet/pods/62b266ca-ea4c-4fb2-a376-fb7ce2d341d7/volumes" Feb 02 08:20:03 crc kubenswrapper[4842]: I0202 08:20:03.433730 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:20:03 crc kubenswrapper[4842]: E0202 08:20:03.434875 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:20:16 crc kubenswrapper[4842]: I0202 08:20:16.434076 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" 
Feb 02 08:20:16 crc kubenswrapper[4842]: E0202 08:20:16.435256 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:20:29 crc kubenswrapper[4842]: I0202 08:20:29.433793 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:20:29 crc kubenswrapper[4842]: E0202 08:20:29.434789 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:20:43 crc kubenswrapper[4842]: I0202 08:20:43.434011 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:20:43 crc kubenswrapper[4842]: E0202 08:20:43.434751 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:20:57 crc kubenswrapper[4842]: I0202 08:20:57.433797 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:20:57 crc kubenswrapper[4842]: E0202 08:20:57.434775 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:21:12 crc kubenswrapper[4842]: I0202 08:21:12.433635 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:21:12 crc kubenswrapper[4842]: E0202 08:21:12.434772 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:21:23 crc kubenswrapper[4842]: I0202 08:21:23.434425 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:21:23 crc kubenswrapper[4842]: E0202 08:21:23.435489 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:21:35 crc kubenswrapper[4842]: I0202 08:21:35.441081 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:21:35 crc kubenswrapper[4842]: E0202 08:21:35.442335 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:21:50 crc kubenswrapper[4842]: I0202 08:21:50.434305 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:21:50 crc kubenswrapper[4842]: E0202 08:21:50.437923 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:22:02 crc kubenswrapper[4842]: I0202 08:22:02.433874 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:22:02 crc kubenswrapper[4842]: E0202 08:22:02.435169 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:22:14 crc kubenswrapper[4842]: I0202 08:22:14.433626 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:22:14 crc kubenswrapper[4842]: E0202 08:22:14.434425 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:22:29 crc kubenswrapper[4842]: I0202 08:22:29.435299 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:22:29 crc kubenswrapper[4842]: E0202 08:22:29.436258 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:22:40 crc kubenswrapper[4842]: I0202 08:22:40.433859 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:22:40 crc kubenswrapper[4842]: E0202 08:22:40.434680 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:22:54 crc kubenswrapper[4842]: I0202 08:22:54.434176 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:22:54 crc kubenswrapper[4842]: E0202 08:22:54.435051 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:23:09 crc kubenswrapper[4842]: I0202 08:23:09.434095 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:23:09 crc kubenswrapper[4842]: E0202 08:23:09.435010 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.775494 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:10 crc kubenswrapper[4842]: E0202 08:23:10.776702 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="extract-utilities" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.776736 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="extract-utilities" Feb 02 08:23:10 crc kubenswrapper[4842]: E0202 08:23:10.776802 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="registry-server" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.776821 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="registry-server" Feb 02 08:23:10 crc kubenswrapper[4842]: E0202 08:23:10.776856 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="extract-content" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.776875 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="extract-content" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 
08:23:10.777318 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b266ca-ea4c-4fb2-a376-fb7ce2d341d7" containerName="registry-server" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.779873 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.793618 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.965478 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.965563 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:10 crc kubenswrapper[4842]: I0202 08:23:10.965695 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4sz\" (UniqueName: \"kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.066894 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.067001 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x4sz\" (UniqueName: \"kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.067292 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.067961 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.068129 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.087702 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x4sz\" (UniqueName: \"kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz\") pod \"redhat-marketplace-92n7v\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.136207 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.628002 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:11 crc kubenswrapper[4842]: W0202 08:23:11.635437 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc20ef3d7_41e2_462b_b3d1_3cc95f3463c6.slice/crio-1a3d8468bca2319f51a0af14455ac46c3ac8b3a7588ab0e949c33c3733199525 WatchSource:0}: Error finding container 1a3d8468bca2319f51a0af14455ac46c3ac8b3a7588ab0e949c33c3733199525: Status 404 returned error can't find the container with id 1a3d8468bca2319f51a0af14455ac46c3ac8b3a7588ab0e949c33c3733199525 Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.837013 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerStarted","Data":"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4"} Feb 02 08:23:11 crc kubenswrapper[4842]: I0202 08:23:11.838549 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerStarted","Data":"1a3d8468bca2319f51a0af14455ac46c3ac8b3a7588ab0e949c33c3733199525"} Feb 02 08:23:12 crc kubenswrapper[4842]: I0202 08:23:12.845870 4842 generic.go:334] "Generic (PLEG): container finished" podID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerID="1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4" exitCode=0 Feb 02 08:23:12 crc kubenswrapper[4842]: I0202 08:23:12.847412 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 08:23:12 crc kubenswrapper[4842]: I0202 08:23:12.845922 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerDied","Data":"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4"} Feb 02 08:23:13 crc kubenswrapper[4842]: I0202 08:23:13.858548 4842 generic.go:334] "Generic (PLEG): container finished" podID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerID="4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69" exitCode=0 Feb 02 08:23:13 crc kubenswrapper[4842]: I0202 08:23:13.858641 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerDied","Data":"4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69"} Feb 02 08:23:14 crc kubenswrapper[4842]: I0202 08:23:14.869657 4842 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerStarted","Data":"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8"} Feb 02 08:23:14 crc kubenswrapper[4842]: I0202 08:23:14.889713 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-92n7v" podStartSLOduration=3.421169337 podStartE2EDuration="4.889692867s" podCreationTimestamp="2026-02-02 08:23:10 +0000 UTC" firstStartedPulling="2026-02-02 08:23:12.847054747 +0000 UTC m=+5818.224322679" lastFinishedPulling="2026-02-02 08:23:14.315578297 +0000 UTC m=+5819.692846209" observedRunningTime="2026-02-02 08:23:14.886068757 +0000 UTC m=+5820.263336689" watchObservedRunningTime="2026-02-02 08:23:14.889692867 +0000 UTC m=+5820.266960779" Feb 02 08:23:21 crc kubenswrapper[4842]: I0202 08:23:21.136902 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:21 crc kubenswrapper[4842]: I0202 08:23:21.137296 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:21 crc kubenswrapper[4842]: I0202 08:23:21.194170 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:21 crc kubenswrapper[4842]: I0202 08:23:21.999043 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:22 crc kubenswrapper[4842]: I0202 08:23:22.070921 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:23 crc kubenswrapper[4842]: I0202 08:23:23.943301 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-92n7v" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="registry-server" containerID="cri-o://236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8" gracePeriod=2 Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.433390 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:23:24 crc kubenswrapper[4842]: E0202 08:23:24.434267 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.459029 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.612407 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities\") pod \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.612771 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x4sz\" (UniqueName: \"kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz\") pod \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.613025 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content\") pod \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\" (UID: \"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6\") " Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.616064 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities" (OuterVolumeSpecName: "utilities") pod "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" (UID: "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.619388 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz" (OuterVolumeSpecName: "kube-api-access-6x4sz") pod "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" (UID: "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6"). InnerVolumeSpecName "kube-api-access-6x4sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.633667 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" (UID: "c20ef3d7-41e2-462b-b3d1-3cc95f3463c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.714400 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.714433 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.714446 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x4sz\" (UniqueName: \"kubernetes.io/projected/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6-kube-api-access-6x4sz\") on node \"crc\" DevicePath \"\"" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.953464 4842 generic.go:334] "Generic (PLEG): container finished" podID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerID="236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8" exitCode=0 Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.953510 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerDied","Data":"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8"} Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.953575 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92n7v" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.953621 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92n7v" event={"ID":"c20ef3d7-41e2-462b-b3d1-3cc95f3463c6","Type":"ContainerDied","Data":"1a3d8468bca2319f51a0af14455ac46c3ac8b3a7588ab0e949c33c3733199525"} Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.953648 4842 scope.go:117] "RemoveContainer" containerID="236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8" Feb 02 08:23:24 crc kubenswrapper[4842]: I0202 08:23:24.977638 4842 scope.go:117] "RemoveContainer" containerID="4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.002367 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.006667 4842 scope.go:117] "RemoveContainer" containerID="1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.009977 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-92n7v"] Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.051974 4842 scope.go:117] "RemoveContainer" containerID="236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8" Feb 02 08:23:25 crc kubenswrapper[4842]: E0202 08:23:25.052396 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8\": container with ID starting with 236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8 not found: ID does not exist" containerID="236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.052446 4842 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8"} err="failed to get container status \"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8\": rpc error: code = NotFound desc = could not find container \"236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8\": container with ID starting with 236c841c262b577ad25fab8290d0a53ab008f9f0ead3db09c39198d77ebd2bd8 not found: ID does not exist" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.052477 4842 scope.go:117] "RemoveContainer" containerID="4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69" Feb 02 08:23:25 crc kubenswrapper[4842]: E0202 08:23:25.052819 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69\": container with ID starting with 4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69 not found: ID does not exist" containerID="4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.052844 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69"} err="failed to get container status \"4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69\": rpc error: code = NotFound desc = could not find container \"4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69\": container with ID starting with 4a17829ab7175ef6fec4865e377bf261b160d5a01d295d9f43824e8f8e9fcf69 not found: ID does not exist" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.052860 4842 scope.go:117] "RemoveContainer" containerID="1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4" Feb 02 08:23:25 crc kubenswrapper[4842]: E0202 08:23:25.053263 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4\": container with ID starting with 1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4 not found: ID does not exist" containerID="1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.053289 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4"} err="failed to get container status \"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4\": rpc error: code = NotFound desc = could not find container \"1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4\": container with ID starting with 1dcb8f41d001f1870065b37079845cb7510abd923c73e90e8de10ea629515ad4 not found: ID does not exist" Feb 02 08:23:25 crc kubenswrapper[4842]: I0202 08:23:25.443508 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" path="/var/lib/kubelet/pods/c20ef3d7-41e2-462b-b3d1-3cc95f3463c6/volumes" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.205200 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"] Feb 02 08:23:28 crc kubenswrapper[4842]: E0202 08:23:28.206371 4842 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="extract-content" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.206407 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="extract-content" Feb 02 08:23:28 crc kubenswrapper[4842]: E0202 08:23:28.206478 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="extract-utilities" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.206496 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="extract-utilities" Feb 02 08:23:28 crc kubenswrapper[4842]: E0202 08:23:28.206518 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="registry-server" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.206536 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="registry-server" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.206875 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20ef3d7-41e2-462b-b3d1-3cc95f3463c6" containerName="registry-server" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.209330 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.222786 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"] Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.373363 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.373786 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stnd8\" (UniqueName: \"kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.374005 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.475497 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.475588 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stnd8\" (UniqueName: \"kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8\") pod 
\"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.475653 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.476060 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.476070 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.496207 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stnd8\" (UniqueName: \"kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8\") pod \"certified-operators-4s5zq\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") " pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:28 crc kubenswrapper[4842]: I0202 08:23:28.549014 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.009467 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"] Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.449551 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qzj89/must-gather-9skzq"] Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.451091 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.452621 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qzj89"/"default-dockercfg-k7shm" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.453010 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qzj89"/"openshift-service-ca.crt" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.456678 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qzj89"/"kube-root-ca.crt" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.457646 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qzj89/must-gather-9skzq"] Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.598791 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcjz8\" (UniqueName: \"kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.598904 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.699911 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.700110 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcjz8\" (UniqueName: \"kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.700270 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.724270 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcjz8\" (UniqueName: \"kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8\") pod \"must-gather-9skzq\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.765178 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.996686 4842 generic.go:334] "Generic (PLEG): container finished" podID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerID="2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7" exitCode=0 Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.996738 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerDied","Data":"2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7"} Feb 02 08:23:29 crc kubenswrapper[4842]: I0202 08:23:29.997066 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerStarted","Data":"cb6ecb9ed4cf1a283a792186876af6d935b160ffb9e9293bf5fd79f6d72b0634"} Feb 02 08:23:30 crc kubenswrapper[4842]: I0202 08:23:30.183982 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qzj89/must-gather-9skzq"] Feb 02 08:23:30 crc kubenswrapper[4842]: W0202 08:23:30.194616 4842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d2d69ec_05f0_4d32_9003_71634c635ab6.slice/crio-0ffb4e25f0260fc66de621c9e29c0bf4056e64803573898136b690c80e7ff3cf WatchSource:0}: Error finding container 0ffb4e25f0260fc66de621c9e29c0bf4056e64803573898136b690c80e7ff3cf: Status 404 returned error can't find the container with id 0ffb4e25f0260fc66de621c9e29c0bf4056e64803573898136b690c80e7ff3cf Feb 02 08:23:31 crc kubenswrapper[4842]: I0202 08:23:31.005689 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzj89/must-gather-9skzq" event={"ID":"0d2d69ec-05f0-4d32-9003-71634c635ab6","Type":"ContainerStarted","Data":"0ffb4e25f0260fc66de621c9e29c0bf4056e64803573898136b690c80e7ff3cf"} Feb 02 08:23:31 crc kubenswrapper[4842]: I0202 08:23:31.009572 4842 generic.go:334] "Generic (PLEG): container finished" podID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerID="45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c" exitCode=0 Feb 02 08:23:31 crc kubenswrapper[4842]: I0202 08:23:31.009616 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerDied","Data":"45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c"} Feb 02 08:23:32 crc kubenswrapper[4842]: I0202 08:23:32.020924 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerStarted","Data":"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a"} Feb 02 08:23:32 crc kubenswrapper[4842]: I0202 08:23:32.049856 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4s5zq" podStartSLOduration=2.604437819 podStartE2EDuration="4.049840378s" podCreationTimestamp="2026-02-02 08:23:28 +0000 UTC" firstStartedPulling="2026-02-02 08:23:29.99839159 +0000 UTC m=+5835.375659502" lastFinishedPulling="2026-02-02 08:23:31.443794109 +0000 UTC m=+5836.821062061" observedRunningTime="2026-02-02 08:23:32.046471184 +0000 UTC m=+5837.423739176" watchObservedRunningTime="2026-02-02 08:23:32.049840378 +0000 UTC m=+5837.427108290" Feb 02 
08:23:36 crc kubenswrapper[4842]: I0202 08:23:36.433596 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:23:36 crc kubenswrapper[4842]: E0202 08:23:36.434256 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:23:37 crc kubenswrapper[4842]: I0202 08:23:37.064491 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzj89/must-gather-9skzq" event={"ID":"0d2d69ec-05f0-4d32-9003-71634c635ab6","Type":"ContainerStarted","Data":"b501e90b320415eedc57d5d97621c4286482ad34559763a80d58ed79fe0c298d"} Feb 02 08:23:37 crc kubenswrapper[4842]: I0202 08:23:37.065248 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzj89/must-gather-9skzq" event={"ID":"0d2d69ec-05f0-4d32-9003-71634c635ab6","Type":"ContainerStarted","Data":"e6a8709be9b242969c88ec63b2238ff790746bf3bcc9e5f6c743f53912a02b12"} Feb 02 08:23:37 crc kubenswrapper[4842]: I0202 08:23:37.087360 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qzj89/must-gather-9skzq" podStartSLOduration=1.764619947 podStartE2EDuration="8.087335922s" podCreationTimestamp="2026-02-02 08:23:29 +0000 UTC" firstStartedPulling="2026-02-02 08:23:30.197144082 +0000 UTC m=+5835.574412034" lastFinishedPulling="2026-02-02 08:23:36.519860107 +0000 UTC m=+5841.897128009" observedRunningTime="2026-02-02 08:23:37.081351574 +0000 UTC m=+5842.458619516" watchObservedRunningTime="2026-02-02 08:23:37.087335922 +0000 UTC m=+5842.464603874" Feb 02 08:23:38 crc kubenswrapper[4842]: I0202 08:23:38.549465 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:38 crc kubenswrapper[4842]: I0202 08:23:38.549836 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:38 crc kubenswrapper[4842]: I0202 08:23:38.633124 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:39 crc kubenswrapper[4842]: I0202 08:23:39.142576 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4s5zq" Feb 02 08:23:39 crc kubenswrapper[4842]: I0202 08:23:39.220032 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"] Feb 02 08:23:41 crc kubenswrapper[4842]: I0202 08:23:41.090010 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4s5zq" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="registry-server" containerID="cri-o://6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a" gracePeriod=2 Feb 02 08:23:41 crc kubenswrapper[4842]: I0202 08:23:41.998756 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4s5zq"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.097530 4842 generic.go:334] "Generic (PLEG): container finished" podID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerID="6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a" exitCode=0
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.097576 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerDied","Data":"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a"}
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.097605 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4s5zq" event={"ID":"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f","Type":"ContainerDied","Data":"cb6ecb9ed4cf1a283a792186876af6d935b160ffb9e9293bf5fd79f6d72b0634"}
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.097625 4842 scope.go:117] "RemoveContainer" containerID="6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.097757 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4s5zq"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.103731 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stnd8\" (UniqueName: \"kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8\") pod \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") "
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.103808 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content\") pod \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") "
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.103905 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities\") pod \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\" (UID: \"524ee812-fd5b-4a94-b4e7-6a26c9e52e7f\") "
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.105489 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities" (OuterVolumeSpecName: "utilities") pod "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" (UID: "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.112731 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8" (OuterVolumeSpecName: "kube-api-access-stnd8") pod "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" (UID: "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f"). InnerVolumeSpecName "kube-api-access-stnd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.113841 4842 scope.go:117] "RemoveContainer" containerID="45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.143867 4842 scope.go:117] "RemoveContainer" containerID="2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.155467 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" (UID: "524ee812-fd5b-4a94-b4e7-6a26c9e52e7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.173405 4842 scope.go:117] "RemoveContainer" containerID="6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a"
Feb 02 08:23:42 crc kubenswrapper[4842]: E0202 08:23:42.173895 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a\": container with ID starting with 6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a not found: ID does not exist" containerID="6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.173937 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a"} err="failed to get container status \"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a\": rpc error: code = NotFound desc = could not find container \"6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a\": container with ID starting with 6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a not found: ID does not exist"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.173987 4842 scope.go:117] "RemoveContainer" containerID="45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c"
Feb 02 08:23:42 crc kubenswrapper[4842]: E0202 08:23:42.174583 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c\": container with ID starting with 45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c not found: ID does not exist" containerID="45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.174607 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c"} err="failed to get container status \"45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c\": rpc error: code = NotFound desc = could not find container \"45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c\": container with ID starting with 45580c490578fc85241fa10f976d4bf6ca664f05cc6212b4c54d6ffd83f69c0c not found: ID does not exist"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.174623 4842 scope.go:117] "RemoveContainer" containerID="2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7"
Feb 02 08:23:42 crc kubenswrapper[4842]: E0202 08:23:42.174972 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7\": container with ID starting with 2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7 not found: ID does not exist" containerID="2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.174992 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7"} err="failed to get container status \"2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7\": rpc error: code = NotFound desc = could not find container \"2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7\": container with ID starting with 2a63711adc6d57e32132aa965e0453f07c1cf5cf8c5457c9f42f8ec9a99976a7 not found: ID does not exist"
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.205395 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.205427 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-utilities\") on node \"crc\" DevicePath \"\""
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.205437 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stnd8\" (UniqueName: \"kubernetes.io/projected/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f-kube-api-access-stnd8\") on node \"crc\" DevicePath \"\""
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.436860 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"]
Feb 02 08:23:42 crc kubenswrapper[4842]: I0202 08:23:42.444632 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4s5zq"]
Feb 02 08:23:43 crc kubenswrapper[4842]: I0202 08:23:43.441481 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" path="/var/lib/kubelet/pods/524ee812-fd5b-4a94-b4e7-6a26c9e52e7f/volumes"
Feb 02 08:23:50 crc kubenswrapper[4842]: I0202 08:23:50.433572 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82"
Feb 02 08:23:50 crc kubenswrapper[4842]: E0202 08:23:50.434169 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
Feb 02 08:24:01 crc kubenswrapper[4842]: I0202 08:24:01.434108 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82"
Feb 02 08:24:01 crc kubenswrapper[4842]: E0202 08:24:01.435052 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff"
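The RemoveContainer / NotFound pairs above are the kubelet's container cleanup racing a container that cri-o has already removed: the status query fails with gRPC NotFound, and the error is logged but treated as "already gone" rather than retried. A minimal sketch of that tolerate-NotFound pattern against the CRI API, assuming the k8s.io/cri-api client and a cri-o socket at /var/run/crio/crio.sock (illustrative, not the kubelet's own code):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeIfPresent asks the runtime for the container's status and treats
// NotFound as success: the container was already removed by someone else.
func removeIfPresent(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	_, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		return nil // already gone; nothing left to delete
	}
	if err != nil {
		return fmt.Errorf("container status %q: %w", id, err)
	}
	_, err = rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: id})
	return err
}

func main() {
	// grpc.Dial over a unix socket, as the kubelet talks to cri-o locally.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	fmt.Println(removeIfPresent(ctx, runtimeapi.NewRuntimeServiceClient(conn), "6a057d885aa6a535858a03923ddfde4b21f3995c6289edd3885366844a84ab4a"))
}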
Feb 02 08:24:13 crc kubenswrapper[4842]: I0202 08:24:13.434084 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82"
Feb 02 08:24:14 crc kubenswrapper[4842]: I0202 08:24:14.362415 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f"}
Feb 02 08:24:42 crc kubenswrapper[4842]: I0202 08:24:42.663565 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/util/0.log"
Feb 02 08:24:42 crc kubenswrapper[4842]: I0202 08:24:42.801895 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/util/0.log"
Feb 02 08:24:42 crc kubenswrapper[4842]: I0202 08:24:42.825939 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/pull/0.log"
Feb 02 08:24:42 crc kubenswrapper[4842]: I0202 08:24:42.906158 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/pull/0.log"
Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.002985 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/pull/0.log"
Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.031722 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/util/0.log"
Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.032387 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6f0ceecaefb29a8e5801b760101e31cf6295f8d10236ea1e93fc043d1dv4xr_3d9034b5-b9d6-4e70-8cae-f6226cd41d78/extract/0.log"
Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.241751 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-jknjh_79c1d3d0-ca85-4bbf-a7a7-74d260b5d4b1/manager/0.log"
Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.246024 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-stkw6_c679df42-e383-4a11-a50d-af9dbd4c4eb0/manager/0.log"
Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.392731 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-4hrlz_bda41d33-cd37-4c4d-99d6-3808993000b4/manager/0.log"
Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.458422 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-xq5nz_bd7497e1-afb6-44b5-8270-1021f837a65a/manager/0.log"
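The "back-off 5m0s" entries at 08:23:50 and 08:24:01 show CrashLoopBackOff at its cap: the kubelet delays each restart of a repeatedly failing container, starting from a small initial delay and doubling per consecutive failure until it saturates (the kubelet's defaults are 10s initial and a 5m ceiling). A toy sketch of that ladder, under those assumed defaults:

package main

import (
	"fmt"
	"time"
)

// backoff returns the restart delay after n consecutive failures:
// the delay doubles each time and saturates at limit.
func backoff(initial, limit time.Duration, failures int) time.Duration {
	d := initial
	for i := 0; i < failures; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("failure %d -> wait %v\n", n, backoff(10*time.Second, 5*time.Minute, n))
	}
}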
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-xq5nz_bd7497e1-afb6-44b5-8270-1021f837a65a/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.545499 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-96sfj_17af9a3f-7823-4340-bebc-e50e11807467/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.639718 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-skdgw_95850a5b-9e70-4f77-86ee-ff016eae6e7e/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.918369 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-b9qjw_a020d6c0-e749-4442-93e8-64a4c463e9d5/manager/0.log" Feb 02 08:24:43 crc kubenswrapper[4842]: I0202 08:24:43.942143 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-jmvqq_0222c7fe-6311-4445-bf7f-e43fcb5ec5f9/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.091470 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-nzz4p_46313c01-1f03-4185-b7c4-2da5420bd703/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.109461 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-kz2zn_590654af-c639-4e9d-b821-c6caa1016695/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.277099 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-nsf9v_bfe64bf6-fea9-4b04-b4ff-74fe4b9c2ece/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.349821 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-4zk9c_95d96e63-61f2-4d8d-be72-562384cb6f23/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.508713 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-wpm9z_60d10db6-9c42-471b-84fb-58e9c04c60fc/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.525676 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-c9lwb_b7d68fac-cffb-4dd6-8c1b-4537a3a36571/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.650248 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-86dfb79cc7qc9fb_5e7a9701-ed45-4289-8272-f850efbf1e75/manager/0.log" Feb 02 08:24:44 crc kubenswrapper[4842]: I0202 08:24:44.804680 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-757f46c65d-gfksg_3081c94c-e2f4-48b5-90b5-8bcc58234a9b/operator/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.032439 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5549s_e2e2a93a-9c50-4769-9983-e51f49c374d5/registry-server/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.183575 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-d8nns_255c38ec-b5b8-4017-94b8-93553884ed09/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.258998 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-qlxtv_58dd3197-be46-474d-84f5-c066a9483a52/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.432041 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zbqhn_1fffe017-3a94-4565-9778-ccea208aa8cc/operator/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.476002 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b6f655c79-bwmdm_6b1810ad-df0b-44b5-8ba8-953039b85411/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.713754 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-lbjfv_6344fbd8-d71a-4461-ad9a-ad71e339ba03/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.733269 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-q7vh6_7db6967e-a602-49a0-83f6-e1caff831173/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.950167 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-4q9m5_3fb9fda7-8167-4f3d-947b-3e002278ad99/manager/0.log" Feb 02 08:24:45 crc kubenswrapper[4842]: I0202 08:24:45.960443 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-4ndxm_de128384-b923-4536-a485-33e65a1b7e04/manager/0.log" Feb 02 08:25:05 crc kubenswrapper[4842]: I0202 08:25:05.695999 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gnmkq_99922ba3-dd03-4c94-9663-9c530f7b3ad0/control-plane-machine-set-operator/0.log" Feb 02 08:25:05 crc kubenswrapper[4842]: I0202 08:25:05.800519 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qdspj_45dcaecb-f74e-4eaf-886a-28b6632f8d44/kube-rbac-proxy/0.log" Feb 02 08:25:05 crc kubenswrapper[4842]: I0202 08:25:05.860818 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-qdspj_45dcaecb-f74e-4eaf-886a-28b6632f8d44/machine-api-operator/0.log" Feb 02 08:25:19 crc kubenswrapper[4842]: I0202 08:25:19.833209 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-446xj_ffbe6b41-d1da-4aec-bbfd-376c2f53a962/cert-manager-controller/0.log" Feb 02 08:25:20 crc kubenswrapper[4842]: I0202 08:25:20.003913 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-j6288_d7710841-a6c0-41ce-a408-f5940ab76922/cert-manager-cainjector/0.log" Feb 02 08:25:20 crc kubenswrapper[4842]: I0202 08:25:20.050663 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-hj9fx_466ec5f5-a1b9-439d-a9d6-d5dbbe8d16c9/cert-manager-webhook/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.087825 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-z2jg2_1875099f-a0f5-4ba0-b757-35755a6d0bcd/nmstate-console-plugin/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.193592 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-hrqrp_558d578f-dad2-4317-8efd-628e30fe306e/nmstate-handler/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.257146 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-h4nv5_a4c06cff-e4b9-41be-a253-b1bf70dc1dc8/kube-rbac-proxy/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.300733 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-h4nv5_a4c06cff-e4b9-41be-a253-b1bf70dc1dc8/nmstate-metrics/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.435613 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-6qznw_3e9d6ba3-9c88-4425-87b9-8a5abd664ce7/nmstate-operator/0.log" Feb 02 08:25:34 crc kubenswrapper[4842]: I0202 08:25:34.463699 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-ctgl4_a9864264-6d23-4a03-8464-6b52a81c01d1/nmstate-webhook/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.431649 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7h9kp_890c2fc6-f70e-47e4-8578-908ec14d719f/kube-rbac-proxy/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.622732 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-frr-files/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.737041 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7h9kp_890c2fc6-f70e-47e4-8578-908ec14d719f/controller/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.823322 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-frr-files/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.841046 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-metrics/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.846986 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-reloader/0.log" Feb 02 08:26:03 crc kubenswrapper[4842]: I0202 08:26:03.929012 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-reloader/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.104641 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-frr-files/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.120069 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-reloader/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.150814 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-metrics/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.211268 4842 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-metrics/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.338760 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-metrics/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.356813 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-frr-files/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.370629 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/cp-reloader/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.420964 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/controller/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.550712 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/frr-metrics/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.552463 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/kube-rbac-proxy/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.621667 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/kube-rbac-proxy-frr/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.814956 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/reloader/0.log" Feb 02 08:26:04 crc kubenswrapper[4842]: I0202 08:26:04.818302 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-ksx75_412f3125-792a-4cb4-858e-e0376903066a/frr-k8s-webhook-server/0.log" Feb 02 08:26:05 crc kubenswrapper[4842]: I0202 08:26:05.008607 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74749cc964-2p2rc_b3b00acd-6687-457f-8744-7057f840e5bd/manager/0.log" Feb 02 08:26:05 crc kubenswrapper[4842]: I0202 08:26:05.216200 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f569b8d8f-wvbf9_793714c2-9e47-4e82-a201-e2e8ac9d7bff/webhook-server/0.log" Feb 02 08:26:05 crc kubenswrapper[4842]: I0202 08:26:05.242175 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-74hmd_3016a0a1-abd6-486a-af0b-cf4c7b8db672/kube-rbac-proxy/0.log" Feb 02 08:26:05 crc kubenswrapper[4842]: I0202 08:26:05.799921 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-74hmd_3016a0a1-abd6-486a-af0b-cf4c7b8db672/speaker/0.log" Feb 02 08:26:05 crc kubenswrapper[4842]: I0202 08:26:05.984411 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fvmtq_79110fb7-d2a2-4330-ab4b-d717a7b943e6/frr/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.232670 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/util/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.394266 4842 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/util/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.427996 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/pull/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.456621 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/pull/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.638295 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/pull/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.638334 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/extract/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.659715 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc7hkrp_bb4e0f2b-3826-4669-8732-05eb885adfe5/util/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.802404 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/util/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.983619 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/pull/0.log" Feb 02 08:26:21 crc kubenswrapper[4842]: I0202 08:26:21.994827 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/pull/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.006586 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/util/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.205758 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/util/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.207058 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/extract/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.225146 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713rz47n_7e244b75-9c3a-4f20-9bd7-071fb2cc7883/pull/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.403755 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/util/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.650690 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/util/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.663601 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/pull/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.701122 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/pull/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.903369 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/util/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.910932 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/pull/0.log" Feb 02 08:26:22 crc kubenswrapper[4842]: I0202 08:26:22.969438 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5927nw_68358186-3b13-493a-9141-c206629af46e/extract/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.106065 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-utilities/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.292290 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-content/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.303853 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-content/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.480423 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-utilities/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.627667 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-utilities/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.645764 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/extract-content/0.log" Feb 02 08:26:23 crc kubenswrapper[4842]: I0202 08:26:23.842436 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-utilities/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.021681 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-utilities/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.044091 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7hzjr_940dd57b-92a3-4e95-b3b4-5df0efe013b1/registry-server/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.049851 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-content/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.077627 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-content/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.215926 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-utilities/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.254006 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/extract-content/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.476939 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vbb7f_57f599bc-2735-4763-8510-fe623d36bd10/marketplace-operator/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.610372 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d9hpw_6af4d552-478d-4a9f-8fcb-8a4b30a29f61/registry-server/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.621586 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-utilities/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.704998 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-utilities/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.709155 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-content/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.803762 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-content/0.log" Feb 02 08:26:24 crc kubenswrapper[4842]: I0202 08:26:24.966827 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-utilities/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.028234 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-utilities/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.059283 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/extract-content/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.120909 4842 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-sw8ll_7ea1df1c-0a15-44a8-9bb6-9f4513c3b482/registry-server/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.280661 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-utilities/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.293013 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-content/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.321732 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-content/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.448695 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-utilities/0.log" Feb 02 08:26:25 crc kubenswrapper[4842]: I0202 08:26:25.467191 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/extract-content/0.log" Feb 02 08:26:26 crc kubenswrapper[4842]: I0202 08:26:26.139958 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-l6tg7_23620448-86fc-4fa7-9295-d9ce6de9b8e6/registry-server/0.log" Feb 02 08:26:42 crc kubenswrapper[4842]: I0202 08:26:42.146096 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:26:42 crc kubenswrapper[4842]: I0202 08:26:42.146608 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:27:12 crc kubenswrapper[4842]: I0202 08:27:12.146789 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:27:12 crc kubenswrapper[4842]: I0202 08:27:12.147479 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:27:36 crc kubenswrapper[4842]: I0202 08:27:36.970356 4842 generic.go:334] "Generic (PLEG): container finished" podID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerID="e6a8709be9b242969c88ec63b2238ff790746bf3bcc9e5f6c743f53912a02b12" exitCode=0 Feb 02 08:27:36 crc kubenswrapper[4842]: I0202 08:27:36.970517 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qzj89/must-gather-9skzq" 
event={"ID":"0d2d69ec-05f0-4d32-9003-71634c635ab6","Type":"ContainerDied","Data":"e6a8709be9b242969c88ec63b2238ff790746bf3bcc9e5f6c743f53912a02b12"} Feb 02 08:27:36 crc kubenswrapper[4842]: I0202 08:27:36.974640 4842 scope.go:117] "RemoveContainer" containerID="e6a8709be9b242969c88ec63b2238ff790746bf3bcc9e5f6c743f53912a02b12" Feb 02 08:27:37 crc kubenswrapper[4842]: I0202 08:27:37.565637 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qzj89_must-gather-9skzq_0d2d69ec-05f0-4d32-9003-71634c635ab6/gather/0.log" Feb 02 08:27:42 crc kubenswrapper[4842]: I0202 08:27:42.146760 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:27:42 crc kubenswrapper[4842]: I0202 08:27:42.147619 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:27:42 crc kubenswrapper[4842]: I0202 08:27:42.147703 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 08:27:42 crc kubenswrapper[4842]: I0202 08:27:42.148932 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 08:27:42 crc kubenswrapper[4842]: I0202 08:27:42.149052 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f" gracePeriod=600 Feb 02 08:27:43 crc kubenswrapper[4842]: I0202 08:27:43.021723 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f" exitCode=0 Feb 02 08:27:43 crc kubenswrapper[4842]: I0202 08:27:43.021817 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f"} Feb 02 08:27:43 crc kubenswrapper[4842]: I0202 08:27:43.023037 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerStarted","Data":"f01c1d4f45a6891b006202538e283e03c804cd552c7b9e7ccd0a0ff087cc1df5"} Feb 02 08:27:43 crc kubenswrapper[4842]: I0202 08:27:43.023088 4842 scope.go:117] "RemoveContainer" containerID="61b6479311d3a8372c85b950dee10be1af98216f468c2e676d0e31d4f2fc3e82" Feb 02 08:27:44 crc kubenswrapper[4842]: I0202 08:27:44.886404 4842 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-must-gather-qzj89/must-gather-9skzq"] Feb 02 08:27:44 crc kubenswrapper[4842]: I0202 08:27:44.886980 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qzj89/must-gather-9skzq" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="copy" containerID="cri-o://b501e90b320415eedc57d5d97621c4286482ad34559763a80d58ed79fe0c298d" gracePeriod=2 Feb 02 08:27:44 crc kubenswrapper[4842]: I0202 08:27:44.892384 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qzj89/must-gather-9skzq"] Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.041457 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qzj89_must-gather-9skzq_0d2d69ec-05f0-4d32-9003-71634c635ab6/copy/0.log" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.042022 4842 generic.go:334] "Generic (PLEG): container finished" podID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerID="b501e90b320415eedc57d5d97621c4286482ad34559763a80d58ed79fe0c298d" exitCode=143 Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.356066 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qzj89_must-gather-9skzq_0d2d69ec-05f0-4d32-9003-71634c635ab6/copy/0.log" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.356970 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.492155 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output\") pod \"0d2d69ec-05f0-4d32-9003-71634c635ab6\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.492294 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcjz8\" (UniqueName: \"kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8\") pod \"0d2d69ec-05f0-4d32-9003-71634c635ab6\" (UID: \"0d2d69ec-05f0-4d32-9003-71634c635ab6\") " Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.497925 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8" (OuterVolumeSpecName: "kube-api-access-kcjz8") pod "0d2d69ec-05f0-4d32-9003-71634c635ab6" (UID: "0d2d69ec-05f0-4d32-9003-71634c635ab6"). InnerVolumeSpecName "kube-api-access-kcjz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.588154 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0d2d69ec-05f0-4d32-9003-71634c635ab6" (UID: "0d2d69ec-05f0-4d32-9003-71634c635ab6"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.593984 4842 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d2d69ec-05f0-4d32-9003-71634c635ab6-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 02 08:27:45 crc kubenswrapper[4842]: I0202 08:27:45.594129 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcjz8\" (UniqueName: \"kubernetes.io/projected/0d2d69ec-05f0-4d32-9003-71634c635ab6-kube-api-access-kcjz8\") on node \"crc\" DevicePath \"\"" Feb 02 08:27:46 crc kubenswrapper[4842]: I0202 08:27:46.049640 4842 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qzj89_must-gather-9skzq_0d2d69ec-05f0-4d32-9003-71634c635ab6/copy/0.log" Feb 02 08:27:46 crc kubenswrapper[4842]: I0202 08:27:46.051128 4842 scope.go:117] "RemoveContainer" containerID="b501e90b320415eedc57d5d97621c4286482ad34559763a80d58ed79fe0c298d" Feb 02 08:27:46 crc kubenswrapper[4842]: I0202 08:27:46.051211 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qzj89/must-gather-9skzq" Feb 02 08:27:46 crc kubenswrapper[4842]: I0202 08:27:46.068493 4842 scope.go:117] "RemoveContainer" containerID="e6a8709be9b242969c88ec63b2238ff790746bf3bcc9e5f6c743f53912a02b12" Feb 02 08:27:47 crc kubenswrapper[4842]: I0202 08:27:47.466734 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" path="/var/lib/kubelet/pods/0d2d69ec-05f0-4d32-9003-71634c635ab6/volumes" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.005542 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:16 crc kubenswrapper[4842]: E0202 08:28:16.007043 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="extract-utilities" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007077 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="extract-utilities" Feb 02 08:28:16 crc kubenswrapper[4842]: E0202 08:28:16.007113 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="gather" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007131 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="gather" Feb 02 08:28:16 crc kubenswrapper[4842]: E0202 08:28:16.007156 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="registry-server" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007175 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="registry-server" Feb 02 08:28:16 crc kubenswrapper[4842]: E0202 08:28:16.007250 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="extract-content" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007271 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="extract-content" Feb 02 08:28:16 crc kubenswrapper[4842]: E0202 08:28:16.007298 4842 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="copy" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007316 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="copy" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007704 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="524ee812-fd5b-4a94-b4e7-6a26c9e52e7f" containerName="registry-server" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007746 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="copy" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.007808 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2d69ec-05f0-4d32-9003-71634c635ab6" containerName="gather" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.010091 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.022817 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.201151 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.201321 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjkzj\" (UniqueName: \"kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.201436 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.302748 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjkzj\" (UniqueName: \"kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.302888 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.302991 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content\") pod \"community-operators-t4q45\" (UID: 
\"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.303463 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.303548 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.329924 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjkzj\" (UniqueName: \"kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj\") pod \"community-operators-t4q45\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.336930 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:16 crc kubenswrapper[4842]: I0202 08:28:16.820115 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:17 crc kubenswrapper[4842]: I0202 08:28:17.320083 4842 generic.go:334] "Generic (PLEG): container finished" podID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerID="87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8" exitCode=0 Feb 02 08:28:17 crc kubenswrapper[4842]: I0202 08:28:17.320126 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerDied","Data":"87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8"} Feb 02 08:28:17 crc kubenswrapper[4842]: I0202 08:28:17.320157 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerStarted","Data":"e42607ea0f6d3823bc1171920d5109e1351ec0075eb8ab4a58b83dc6b1509c46"} Feb 02 08:28:17 crc kubenswrapper[4842]: I0202 08:28:17.322254 4842 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 02 08:28:18 crc kubenswrapper[4842]: I0202 08:28:18.327890 4842 generic.go:334] "Generic (PLEG): container finished" podID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerID="0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0" exitCode=0 Feb 02 08:28:18 crc kubenswrapper[4842]: I0202 08:28:18.327966 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerDied","Data":"0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0"} Feb 02 08:28:19 crc kubenswrapper[4842]: I0202 08:28:19.340649 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" 
event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerStarted","Data":"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4"} Feb 02 08:28:19 crc kubenswrapper[4842]: I0202 08:28:19.379206 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t4q45" podStartSLOduration=2.931236997 podStartE2EDuration="4.379187848s" podCreationTimestamp="2026-02-02 08:28:15 +0000 UTC" firstStartedPulling="2026-02-02 08:28:17.321970688 +0000 UTC m=+6122.699238610" lastFinishedPulling="2026-02-02 08:28:18.769921539 +0000 UTC m=+6124.147189461" observedRunningTime="2026-02-02 08:28:19.376671546 +0000 UTC m=+6124.753939458" watchObservedRunningTime="2026-02-02 08:28:19.379187848 +0000 UTC m=+6124.756455760" Feb 02 08:28:26 crc kubenswrapper[4842]: I0202 08:28:26.338005 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:26 crc kubenswrapper[4842]: I0202 08:28:26.338589 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:26 crc kubenswrapper[4842]: I0202 08:28:26.398284 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:26 crc kubenswrapper[4842]: I0202 08:28:26.464774 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:26 crc kubenswrapper[4842]: I0202 08:28:26.658458 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:28 crc kubenswrapper[4842]: I0202 08:28:28.413570 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t4q45" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="registry-server" containerID="cri-o://36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4" gracePeriod=2 Feb 02 08:28:28 crc kubenswrapper[4842]: I0202 08:28:28.895259 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.010415 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content\") pod \"5ca6a629-8605-4947-ab91-0a91b960ae4d\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.010522 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities\") pod \"5ca6a629-8605-4947-ab91-0a91b960ae4d\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.010581 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjkzj\" (UniqueName: \"kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj\") pod \"5ca6a629-8605-4947-ab91-0a91b960ae4d\" (UID: \"5ca6a629-8605-4947-ab91-0a91b960ae4d\") " Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.011922 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities" (OuterVolumeSpecName: "utilities") pod "5ca6a629-8605-4947-ab91-0a91b960ae4d" (UID: "5ca6a629-8605-4947-ab91-0a91b960ae4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.023358 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj" (OuterVolumeSpecName: "kube-api-access-tjkzj") pod "5ca6a629-8605-4947-ab91-0a91b960ae4d" (UID: "5ca6a629-8605-4947-ab91-0a91b960ae4d"). InnerVolumeSpecName "kube-api-access-tjkzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.089783 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ca6a629-8605-4947-ab91-0a91b960ae4d" (UID: "5ca6a629-8605-4947-ab91-0a91b960ae4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.112371 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjkzj\" (UniqueName: \"kubernetes.io/projected/5ca6a629-8605-4947-ab91-0a91b960ae4d-kube-api-access-tjkzj\") on node \"crc\" DevicePath \"\"" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.112419 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.112430 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ca6a629-8605-4947-ab91-0a91b960ae4d-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.426425 4842 generic.go:334] "Generic (PLEG): container finished" podID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerID="36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4" exitCode=0 Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.426506 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerDied","Data":"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4"} Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.426660 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t4q45" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.427762 4842 scope.go:117] "RemoveContainer" containerID="36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.427735 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t4q45" event={"ID":"5ca6a629-8605-4947-ab91-0a91b960ae4d","Type":"ContainerDied","Data":"e42607ea0f6d3823bc1171920d5109e1351ec0075eb8ab4a58b83dc6b1509c46"} Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.451882 4842 scope.go:117] "RemoveContainer" containerID="0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.502279 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.510710 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t4q45"] Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.528208 4842 scope.go:117] "RemoveContainer" containerID="87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.548234 4842 scope.go:117] "RemoveContainer" containerID="36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4" Feb 02 08:28:29 crc kubenswrapper[4842]: E0202 08:28:29.548732 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4\": container with ID starting with 36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4 not found: ID does not exist" containerID="36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.548773 
4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4"} err="failed to get container status \"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4\": rpc error: code = NotFound desc = could not find container \"36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4\": container with ID starting with 36535fa55f952d763a4d4e1704726c72236a829e17b55c06328ff3a50a69daa4 not found: ID does not exist" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.548794 4842 scope.go:117] "RemoveContainer" containerID="0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0" Feb 02 08:28:29 crc kubenswrapper[4842]: E0202 08:28:29.549254 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0\": container with ID starting with 0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0 not found: ID does not exist" containerID="0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.549366 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0"} err="failed to get container status \"0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0\": rpc error: code = NotFound desc = could not find container \"0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0\": container with ID starting with 0fb154988ca5730623d9730ee0a05e01116bf37369ed50109c1aa9e4fda75cc0 not found: ID does not exist" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.549483 4842 scope.go:117] "RemoveContainer" containerID="87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8" Feb 02 08:28:29 crc kubenswrapper[4842]: E0202 08:28:29.549890 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8\": container with ID starting with 87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8 not found: ID does not exist" containerID="87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8" Feb 02 08:28:29 crc kubenswrapper[4842]: I0202 08:28:29.549910 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8"} err="failed to get container status \"87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8\": rpc error: code = NotFound desc = could not find container \"87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8\": container with ID starting with 87505d1d8d5aac6ffec05c084e84a766abdecc06bbb9e48ffcc8ed8218ccbfa8 not found: ID does not exist" Feb 02 08:28:31 crc kubenswrapper[4842]: I0202 08:28:31.445279 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" path="/var/lib/kubelet/pods/5ca6a629-8605-4947-ab91-0a91b960ae4d/volumes" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.885896 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bjrt2"] Feb 02 08:29:37 crc kubenswrapper[4842]: E0202 08:29:37.887029 4842 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="extract-content" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.887051 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="extract-content" Feb 02 08:29:37 crc kubenswrapper[4842]: E0202 08:29:37.887076 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="extract-utilities" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.887086 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="extract-utilities" Feb 02 08:29:37 crc kubenswrapper[4842]: E0202 08:29:37.887107 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="registry-server" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.887120 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="registry-server" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.887368 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca6a629-8605-4947-ab91-0a91b960ae4d" containerName="registry-server" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.888888 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:37 crc kubenswrapper[4842]: I0202 08:29:37.905825 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bjrt2"] Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.019863 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-utilities\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.019926 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wsd9\" (UniqueName: \"kubernetes.io/projected/74e3e32c-fe39-4064-8d82-25720d7e23a3-kube-api-access-8wsd9\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.020000 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-catalog-content\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.121324 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-utilities\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.121405 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wsd9\" (UniqueName: \"kubernetes.io/projected/74e3e32c-fe39-4064-8d82-25720d7e23a3-kube-api-access-8wsd9\") pod \"redhat-operators-bjrt2\" (UID: 
\"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.121481 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-catalog-content\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.122061 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-catalog-content\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.122107 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-utilities\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.141931 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wsd9\" (UniqueName: \"kubernetes.io/projected/74e3e32c-fe39-4064-8d82-25720d7e23a3-kube-api-access-8wsd9\") pod \"redhat-operators-bjrt2\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.218141 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:38 crc kubenswrapper[4842]: I0202 08:29:38.681696 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bjrt2"] Feb 02 08:29:39 crc kubenswrapper[4842]: I0202 08:29:39.049431 4842 generic.go:334] "Generic (PLEG): container finished" podID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerID="a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61" exitCode=0 Feb 02 08:29:39 crc kubenswrapper[4842]: I0202 08:29:39.049534 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjrt2" event={"ID":"74e3e32c-fe39-4064-8d82-25720d7e23a3","Type":"ContainerDied","Data":"a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61"} Feb 02 08:29:39 crc kubenswrapper[4842]: I0202 08:29:39.051884 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjrt2" event={"ID":"74e3e32c-fe39-4064-8d82-25720d7e23a3","Type":"ContainerStarted","Data":"7e8d42d5b553d6226fc54f8d53a801fa505d6e4a86c12d3ef9b9e632e565ecb7"} Feb 02 08:29:41 crc kubenswrapper[4842]: I0202 08:29:41.071747 4842 generic.go:334] "Generic (PLEG): container finished" podID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerID="772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe" exitCode=0 Feb 02 08:29:41 crc kubenswrapper[4842]: I0202 08:29:41.072074 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjrt2" event={"ID":"74e3e32c-fe39-4064-8d82-25720d7e23a3","Type":"ContainerDied","Data":"772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe"} Feb 02 08:29:42 crc kubenswrapper[4842]: I0202 08:29:42.084950 4842 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjrt2" event={"ID":"74e3e32c-fe39-4064-8d82-25720d7e23a3","Type":"ContainerStarted","Data":"2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe"} Feb 02 08:29:42 crc kubenswrapper[4842]: I0202 08:29:42.111898 4842 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bjrt2" podStartSLOduration=2.641197586 podStartE2EDuration="5.111881457s" podCreationTimestamp="2026-02-02 08:29:37 +0000 UTC" firstStartedPulling="2026-02-02 08:29:39.050939427 +0000 UTC m=+6204.428207339" lastFinishedPulling="2026-02-02 08:29:41.521623298 +0000 UTC m=+6206.898891210" observedRunningTime="2026-02-02 08:29:42.109370545 +0000 UTC m=+6207.486638527" watchObservedRunningTime="2026-02-02 08:29:42.111881457 +0000 UTC m=+6207.489149379" Feb 02 08:29:42 crc kubenswrapper[4842]: I0202 08:29:42.146521 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:29:42 crc kubenswrapper[4842]: I0202 08:29:42.146601 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:29:48 crc kubenswrapper[4842]: I0202 08:29:48.218550 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:48 crc kubenswrapper[4842]: I0202 08:29:48.219451 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:49 crc kubenswrapper[4842]: I0202 08:29:49.295346 4842 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bjrt2" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerName="registry-server" probeResult="failure" output=< Feb 02 08:29:49 crc kubenswrapper[4842]: timeout: failed to connect service ":50051" within 1s Feb 02 08:29:49 crc kubenswrapper[4842]: > Feb 02 08:29:58 crc kubenswrapper[4842]: I0202 08:29:58.294744 4842 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:58 crc kubenswrapper[4842]: I0202 08:29:58.374008 4842 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:29:58 crc kubenswrapper[4842]: I0202 08:29:58.536549 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bjrt2"] Feb 02 08:29:59 crc kubenswrapper[4842]: I0202 08:29:59.488512 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bjrt2" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerName="registry-server" containerID="cri-o://2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe" gracePeriod=2 Feb 02 08:29:59 crc kubenswrapper[4842]: I0202 08:29:59.972876 4842 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.139353 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-catalog-content\") pod \"74e3e32c-fe39-4064-8d82-25720d7e23a3\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.139432 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-utilities\") pod \"74e3e32c-fe39-4064-8d82-25720d7e23a3\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.139535 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wsd9\" (UniqueName: \"kubernetes.io/projected/74e3e32c-fe39-4064-8d82-25720d7e23a3-kube-api-access-8wsd9\") pod \"74e3e32c-fe39-4064-8d82-25720d7e23a3\" (UID: \"74e3e32c-fe39-4064-8d82-25720d7e23a3\") " Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.141619 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-utilities" (OuterVolumeSpecName: "utilities") pod "74e3e32c-fe39-4064-8d82-25720d7e23a3" (UID: "74e3e32c-fe39-4064-8d82-25720d7e23a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.151196 4842 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8"] Feb 02 08:30:00 crc kubenswrapper[4842]: E0202 08:30:00.151493 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerName="extract-content" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.151506 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerName="extract-content" Feb 02 08:30:00 crc kubenswrapper[4842]: E0202 08:30:00.151532 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerName="registry-server" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.151542 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerName="registry-server" Feb 02 08:30:00 crc kubenswrapper[4842]: E0202 08:30:00.151555 4842 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerName="extract-utilities" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.151562 4842 state_mem.go:107] "Deleted CPUSet assignment" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerName="extract-utilities" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.151715 4842 memory_manager.go:354] "RemoveStaleState removing state" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerName="registry-server" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.152103 4842 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.153656 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74e3e32c-fe39-4064-8d82-25720d7e23a3-kube-api-access-8wsd9" (OuterVolumeSpecName: "kube-api-access-8wsd9") pod "74e3e32c-fe39-4064-8d82-25720d7e23a3" (UID: "74e3e32c-fe39-4064-8d82-25720d7e23a3"). InnerVolumeSpecName "kube-api-access-8wsd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.154342 4842 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.154524 4842 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.171728 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8"] Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.241082 4842 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.241117 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wsd9\" (UniqueName: \"kubernetes.io/projected/74e3e32c-fe39-4064-8d82-25720d7e23a3-kube-api-access-8wsd9\") on node \"crc\" DevicePath \"\"" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.280752 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74e3e32c-fe39-4064-8d82-25720d7e23a3" (UID: "74e3e32c-fe39-4064-8d82-25720d7e23a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.341985 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf8ada11-1330-4382-9baf-4f812677477e-config-volume\") pod \"collect-profiles-29500350-gljq8\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.342359 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8c6l\" (UniqueName: \"kubernetes.io/projected/bf8ada11-1330-4382-9baf-4f812677477e-kube-api-access-h8c6l\") pod \"collect-profiles-29500350-gljq8\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.342489 4842 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf8ada11-1330-4382-9baf-4f812677477e-secret-volume\") pod \"collect-profiles-29500350-gljq8\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.342637 4842 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74e3e32c-fe39-4064-8d82-25720d7e23a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.444094 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8c6l\" (UniqueName: \"kubernetes.io/projected/bf8ada11-1330-4382-9baf-4f812677477e-kube-api-access-h8c6l\") pod \"collect-profiles-29500350-gljq8\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.444154 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf8ada11-1330-4382-9baf-4f812677477e-secret-volume\") pod \"collect-profiles-29500350-gljq8\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.444250 4842 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf8ada11-1330-4382-9baf-4f812677477e-config-volume\") pod \"collect-profiles-29500350-gljq8\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.447858 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf8ada11-1330-4382-9baf-4f812677477e-config-volume\") pod \"collect-profiles-29500350-gljq8\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.450990 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/bf8ada11-1330-4382-9baf-4f812677477e-secret-volume\") pod \"collect-profiles-29500350-gljq8\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.464954 4842 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8c6l\" (UniqueName: \"kubernetes.io/projected/bf8ada11-1330-4382-9baf-4f812677477e-kube-api-access-h8c6l\") pod \"collect-profiles-29500350-gljq8\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.494885 4842 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.499750 4842 generic.go:334] "Generic (PLEG): container finished" podID="74e3e32c-fe39-4064-8d82-25720d7e23a3" containerID="2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe" exitCode=0 Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.499781 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bjrt2" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.499804 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjrt2" event={"ID":"74e3e32c-fe39-4064-8d82-25720d7e23a3","Type":"ContainerDied","Data":"2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe"} Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.500159 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bjrt2" event={"ID":"74e3e32c-fe39-4064-8d82-25720d7e23a3","Type":"ContainerDied","Data":"7e8d42d5b553d6226fc54f8d53a801fa505d6e4a86c12d3ef9b9e632e565ecb7"} Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.500175 4842 scope.go:117] "RemoveContainer" containerID="2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.545188 4842 scope.go:117] "RemoveContainer" containerID="772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.545835 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bjrt2"] Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.551619 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bjrt2"] Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.574077 4842 scope.go:117] "RemoveContainer" containerID="a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.626726 4842 scope.go:117] "RemoveContainer" containerID="2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe" Feb 02 08:30:00 crc kubenswrapper[4842]: E0202 08:30:00.627107 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe\": container with ID starting with 2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe not found: ID does not exist" containerID="2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 
08:30:00.627138 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe"} err="failed to get container status \"2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe\": rpc error: code = NotFound desc = could not find container \"2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe\": container with ID starting with 2879985c1ce00aee5c3ce7da62e25c98102344357d85dd1ea2938ff9c57985fe not found: ID does not exist" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.627163 4842 scope.go:117] "RemoveContainer" containerID="772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe" Feb 02 08:30:00 crc kubenswrapper[4842]: E0202 08:30:00.627757 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe\": container with ID starting with 772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe not found: ID does not exist" containerID="772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.627810 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe"} err="failed to get container status \"772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe\": rpc error: code = NotFound desc = could not find container \"772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe\": container with ID starting with 772314ec322feb1888a8d6ba7ca8203f5376518b679ed1132ccdfeb12bc07fbe not found: ID does not exist" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.627876 4842 scope.go:117] "RemoveContainer" containerID="a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61" Feb 02 08:30:00 crc kubenswrapper[4842]: E0202 08:30:00.628296 4842 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61\": container with ID starting with a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61 not found: ID does not exist" containerID="a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.628339 4842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61"} err="failed to get container status \"a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61\": rpc error: code = NotFound desc = could not find container \"a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61\": container with ID starting with a9529cb8628da9d301f481cc1aeb393f4776996a3b2b7ee6ce68fab6d5102a61 not found: ID does not exist" Feb 02 08:30:00 crc kubenswrapper[4842]: I0202 08:30:00.943806 4842 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8"] Feb 02 08:30:01 crc kubenswrapper[4842]: I0202 08:30:01.445425 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74e3e32c-fe39-4064-8d82-25720d7e23a3" path="/var/lib/kubelet/pods/74e3e32c-fe39-4064-8d82-25720d7e23a3/volumes" Feb 02 08:30:01 crc kubenswrapper[4842]: I0202 08:30:01.505872 4842 generic.go:334] 
"Generic (PLEG): container finished" podID="bf8ada11-1330-4382-9baf-4f812677477e" containerID="7a760ea55ddf2e97d87ae202039dbea9646ab96c3c45e70b6dd8093486120832" exitCode=0 Feb 02 08:30:01 crc kubenswrapper[4842]: I0202 08:30:01.505928 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" event={"ID":"bf8ada11-1330-4382-9baf-4f812677477e","Type":"ContainerDied","Data":"7a760ea55ddf2e97d87ae202039dbea9646ab96c3c45e70b6dd8093486120832"} Feb 02 08:30:01 crc kubenswrapper[4842]: I0202 08:30:01.505951 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" event={"ID":"bf8ada11-1330-4382-9baf-4f812677477e","Type":"ContainerStarted","Data":"dad9f7d3314f075a718033cb91509d67b85d146f6730fe496766ca5541209685"} Feb 02 08:30:02 crc kubenswrapper[4842]: I0202 08:30:02.848809 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:02 crc kubenswrapper[4842]: I0202 08:30:02.981920 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf8ada11-1330-4382-9baf-4f812677477e-secret-volume\") pod \"bf8ada11-1330-4382-9baf-4f812677477e\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " Feb 02 08:30:02 crc kubenswrapper[4842]: I0202 08:30:02.981972 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8c6l\" (UniqueName: \"kubernetes.io/projected/bf8ada11-1330-4382-9baf-4f812677477e-kube-api-access-h8c6l\") pod \"bf8ada11-1330-4382-9baf-4f812677477e\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " Feb 02 08:30:02 crc kubenswrapper[4842]: I0202 08:30:02.982079 4842 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf8ada11-1330-4382-9baf-4f812677477e-config-volume\") pod \"bf8ada11-1330-4382-9baf-4f812677477e\" (UID: \"bf8ada11-1330-4382-9baf-4f812677477e\") " Feb 02 08:30:02 crc kubenswrapper[4842]: I0202 08:30:02.982972 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf8ada11-1330-4382-9baf-4f812677477e-config-volume" (OuterVolumeSpecName: "config-volume") pod "bf8ada11-1330-4382-9baf-4f812677477e" (UID: "bf8ada11-1330-4382-9baf-4f812677477e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 02 08:30:02 crc kubenswrapper[4842]: I0202 08:30:02.988949 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf8ada11-1330-4382-9baf-4f812677477e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bf8ada11-1330-4382-9baf-4f812677477e" (UID: "bf8ada11-1330-4382-9baf-4f812677477e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 02 08:30:02 crc kubenswrapper[4842]: I0202 08:30:02.992457 4842 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf8ada11-1330-4382-9baf-4f812677477e-kube-api-access-h8c6l" (OuterVolumeSpecName: "kube-api-access-h8c6l") pod "bf8ada11-1330-4382-9baf-4f812677477e" (UID: "bf8ada11-1330-4382-9baf-4f812677477e"). InnerVolumeSpecName "kube-api-access-h8c6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 02 08:30:03 crc kubenswrapper[4842]: I0202 08:30:03.084751 4842 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf8ada11-1330-4382-9baf-4f812677477e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 02 08:30:03 crc kubenswrapper[4842]: I0202 08:30:03.084799 4842 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf8ada11-1330-4382-9baf-4f812677477e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 02 08:30:03 crc kubenswrapper[4842]: I0202 08:30:03.084820 4842 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8c6l\" (UniqueName: \"kubernetes.io/projected/bf8ada11-1330-4382-9baf-4f812677477e-kube-api-access-h8c6l\") on node \"crc\" DevicePath \"\"" Feb 02 08:30:03 crc kubenswrapper[4842]: I0202 08:30:03.523110 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" event={"ID":"bf8ada11-1330-4382-9baf-4f812677477e","Type":"ContainerDied","Data":"dad9f7d3314f075a718033cb91509d67b85d146f6730fe496766ca5541209685"} Feb 02 08:30:03 crc kubenswrapper[4842]: I0202 08:30:03.523170 4842 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dad9f7d3314f075a718033cb91509d67b85d146f6730fe496766ca5541209685" Feb 02 08:30:03 crc kubenswrapper[4842]: I0202 08:30:03.523197 4842 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29500350-gljq8" Feb 02 08:30:03 crc kubenswrapper[4842]: I0202 08:30:03.947332 4842 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn"] Feb 02 08:30:03 crc kubenswrapper[4842]: I0202 08:30:03.956310 4842 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29500305-fx7vn"] Feb 02 08:30:05 crc kubenswrapper[4842]: I0202 08:30:05.447207 4842 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6" path="/var/lib/kubelet/pods/7aa0f9fa-efa5-4afa-bce6-88ca1eeef6b6/volumes" Feb 02 08:30:12 crc kubenswrapper[4842]: I0202 08:30:12.146341 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 02 08:30:12 crc kubenswrapper[4842]: I0202 08:30:12.146987 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:30:23 crc kubenswrapper[4842]: I0202 08:30:23.177677 4842 scope.go:117] "RemoveContainer" containerID="9c91867e37901f6b77d290214bde0cb71563f9ff02b28875bfa2c96b8d680083" Feb 02 08:30:42 crc kubenswrapper[4842]: I0202 08:30:42.146981 4842 patch_prober.go:28] interesting pod/machine-config-daemon-p5hqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 02 08:30:42 crc kubenswrapper[4842]: I0202 08:30:42.147710 4842 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 02 08:30:42 crc kubenswrapper[4842]: I0202 08:30:42.147778 4842 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" Feb 02 08:30:42 crc kubenswrapper[4842]: I0202 08:30:42.148610 4842 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f01c1d4f45a6891b006202538e283e03c804cd552c7b9e7ccd0a0ff087cc1df5"} pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 02 08:30:42 crc kubenswrapper[4842]: I0202 08:30:42.148708 4842 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" containerName="machine-config-daemon" containerID="cri-o://f01c1d4f45a6891b006202538e283e03c804cd552c7b9e7ccd0a0ff087cc1df5" gracePeriod=600 Feb 02 08:30:42 crc kubenswrapper[4842]: E0202 08:30:42.279585 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:30:42 crc kubenswrapper[4842]: I0202 08:30:42.919660 4842 generic.go:334] "Generic (PLEG): container finished" podID="0cc6e593-198e-4709-9026-103f892be5ff" containerID="f01c1d4f45a6891b006202538e283e03c804cd552c7b9e7ccd0a0ff087cc1df5" exitCode=0 Feb 02 08:30:42 crc kubenswrapper[4842]: I0202 08:30:42.919698 4842 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" event={"ID":"0cc6e593-198e-4709-9026-103f892be5ff","Type":"ContainerDied","Data":"f01c1d4f45a6891b006202538e283e03c804cd552c7b9e7ccd0a0ff087cc1df5"} Feb 02 08:30:42 crc kubenswrapper[4842]: I0202 08:30:42.920090 4842 scope.go:117] "RemoveContainer" containerID="701d661caf384deb6b8444b74ed46fa7b3bf20ba994db92caac6b1a337d1e11f" Feb 02 08:30:42 crc kubenswrapper[4842]: I0202 08:30:42.920656 4842 scope.go:117] "RemoveContainer" containerID="f01c1d4f45a6891b006202538e283e03c804cd552c7b9e7ccd0a0ff087cc1df5" Feb 02 08:30:42 crc kubenswrapper[4842]: E0202 08:30:42.920952 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" Feb 02 08:30:55 crc kubenswrapper[4842]: I0202 08:30:55.438864 4842 scope.go:117] "RemoveContainer" 
containerID="f01c1d4f45a6891b006202538e283e03c804cd552c7b9e7ccd0a0ff087cc1df5" Feb 02 08:30:55 crc kubenswrapper[4842]: E0202 08:30:55.440886 4842 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p5hqr_openshift-machine-config-operator(0cc6e593-198e-4709-9026-103f892be5ff)\"" pod="openshift-machine-config-operator/machine-config-daemon-p5hqr" podUID="0cc6e593-198e-4709-9026-103f892be5ff" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515140060320024434 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015140060321017352 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015140043145016502 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015140043145015452 5ustar corecore